index | package | name | docstring | code | signature |
---|---|---|---|---|---|
42,886 |
linearmodels.system.model
|
IV3SLS
|
Three-stage Least Squares (3SLS) Estimator
Parameters
----------
equations : dict
Dictionary-like structure containing dependent, exogenous, endogenous
and instrumental variables. Each key is an equation label and must
be a string. Each value must be either a tuple of the form (dependent,
exog, endog, instrument[, weights]) or a dictionary with the key "dependent"
and at least one of "exog", or "endog" together with "instruments". When using a
tuple, values must be provided for all 4 variables, although either
empty arrays or `None` can be passed if a category of variable is not
included in a model. The dictionary may contain optional keys for
"exog", "endog", "instruments", and "weights". "exog" can be omitted
if all variables in an equation are endogenous. Alternatively, "exog"
can contain either an empty array or `None` to indicate that an
equation contains no exogenous regressors. Similarly "endog" and
"instruments" can either be omitted or may contain an empty array (or
`None`) if all variables in an equation are exogenous.
sigma : array_like
Prespecified residual covariance to use in GLS estimation. If not
provided, FGLS is implemented based on an estimate of sigma.
Notes
-----
Estimates a set of regressions which are seemingly unrelated in the sense
that separate estimation would lead to consistent parameter estimates.
Each equation is of the form
.. math::
y_{i,k} = x_{i,k}\beta_i + \epsilon_{i,k}
where k denotes the equation and i denotes the observation index. By
vertically stacking the arrays of dependent variables and placing the exogenous
variables into a block diagonal array, the entire system can be compactly
expressed as
.. math::
Y = X\beta + \epsilon
where
.. math::
Y = \left[\begin{array}{c}Y_1 \\ Y_2 \\ \vdots \\ Y_K\end{array}\right]
and
.. math::
X = \left[\begin{array}{cccc}
X_1 & 0 & \ldots & 0 \\
0 & X_2 & \dots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \dots & X_K
\end{array}\right]
The system instrumental variable (IV) estimator is
.. math::
\hat{\beta}_{IV} & = (X'Z(Z'Z)^{-1}Z'X)^{-1}X'Z(Z'Z)^{-1}Z'Y \\
& = (\hat{X}'\hat{X})^{-1}\hat{X}'Y
where :math:`\hat{X} = Z(Z'Z)^{-1}Z'X`. When certain conditions are
satisfied, a GLS estimator of the form
.. math::
\hat{\beta}_{3SLS} = (\hat{X}'\Omega^{-1}\hat{X})^{-1}\hat{X}'\Omega^{-1}Y
can improve accuracy of coefficient estimates where
.. math::
\Omega = \Sigma \otimes I_N
where :math:`\Sigma` is the covariance matrix of the residuals.
|
class IV3SLS(_LSSystemModelBase):
r"""
Three-stage Least Squares (3SLS) Estimator
Parameters
----------
equations : dict
Dictionary-like structure containing dependent, exogenous, endogenous
and instrumental variables. Each key is an equation label and must
be a string. Each value must be either a tuple of the form (dependent,
exog, endog, instrument[, weights]) or a dictionary with the key "dependent"
and at least one of "exog", or "endog" together with "instruments". When using a
tuple, values must be provided for all 4 variables, although either
empty arrays or `None` can be passed if a category of variable is not
included in a model. The dictionary may contain optional keys for
"exog", "endog", "instruments", and "weights". "exog" can be omitted
if all variables in an equation are endogenous. Alternatively, "exog"
can contain either an empty array or `None` to indicate that an
equation contains no exogenous regressors. Similarly "endog" and
"instruments" can either be omitted or may contain an empty array (or
`None`) if all variables in an equation are exogenous.
sigma : array_like
Prespecified residual covariance to use in GLS estimation. If not
provided, FGLS is implemented based on an estimate of sigma.
Notes
-----
Estimates a set of regressions which are seemingly unrelated in the sense
that separate estimation would lead to consistent parameter estimates.
Each equation is of the form
.. math::
y_{i,k} = x_{i,k}\beta_i + \epsilon_{i,k}
where k denotes the equation and i denotes the observation index. By
vertically stacking the arrays of dependent variables and placing the exogenous
variables into a block diagonal array, the entire system can be compactly
expressed as
.. math::
Y = X\beta + \epsilon
where
.. math::
Y = \left[\begin{array}{c}Y_1 \\ Y_2 \\ \vdots \\ Y_K\end{array}\right]
and
.. math::
X = \left[\begin{array}{cccc}
X_1 & 0 & \ldots & 0 \\
0 & X_2 & \dots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \dots & X_K
\end{array}\right]
The system instrumental variable (IV) estimator is
.. math::
\hat{\beta}_{IV} & = (X'Z(Z'Z)^{-1}Z'X)^{-1}X'Z(Z'Z)^{-1}Z'Y \\
& = (\hat{X}'\hat{X})^{-1}\hat{X}'Y
where :math:`\hat{X} = Z(Z'Z)^{-1}Z'X`. When certain conditions are
satisfied, a GLS estimator of the form
.. math::
\hat{\beta}_{3SLS} = (\hat{X}'\Omega^{-1}\hat{X})^{-1}\hat{X}'\Omega^{-1}Y
can improve accuracy of coefficient estimates where
.. math::
\Omega = \Sigma \otimes I_N
where :math:`\Sigma` is the covariance matrix of the residuals.
"""
def __init__(
self,
equations: Mapping[
str, Mapping[str, ArrayLike | None] | Sequence[ArrayLike | None]
],
*,
sigma: ArrayLike | None = None,
) -> None:
super().__init__(equations, sigma=sigma)
@classmethod
def multivariate_iv(
cls,
dependent: ArrayLike,
exog: ArrayLike | None = None,
endog: ArrayLike | None = None,
instruments: ArrayLike | None = None,
) -> IV3SLS:
"""
Interface for specification of multivariate IV models
Parameters
----------
dependent : array_like
nobs by ndep array of dependent variables
exog : array_like
nobs by nexog array of exogenous regressors common to all models
endog : array_like
nobs by nendog array of endogenous regressors common to all models
instruments : array_like
nobs by ninstr array of instruments to use in all equations
Returns
-------
model : IV3SLS
Model instance
Notes
-----
At least one of exog or endog must be provided.
Utility function to simplify the construction of multivariate IV
models which all use the same regressors and instruments. Constructs
the dictionary of equations from the variables using the common
exogenous, endogenous and instrumental variables.
"""
equations = {}
dependent_ivd = IVData(dependent, var_name="dependent")
if exog is None and endog is None:
raise ValueError("At least one of exog or endog must be provided")
exog_ivd = IVData(exog, var_name="exog")
endog_ivd = IVData(endog, var_name="endog", nobs=dependent.shape[0])
instr_ivd = IVData(instruments, var_name="instruments", nobs=dependent.shape[0])
for col in dependent_ivd.pandas:
equations[str(col)] = {
# TODO: Bug in pandas-stubs
# https://github.com/pandas-dev/pandas-stubs/issues/97
"dependent": dependent_ivd.pandas[[col]],
"exog": exog_ivd.pandas,
"endog": endog_ivd.pandas,
"instruments": instr_ivd.pandas,
}
return cls(equations)
@classmethod
def from_formula(
cls,
formula: str | dict[str, str],
data: DataFrame,
*,
sigma: ArrayLike | None = None,
weights: Mapping[str, ArrayLike] | None = None,
) -> IV3SLS:
"""
Specify a 3SLS using the formula interface
Parameters
----------
formula : {str, dict-like}
Either a string or a dictionary of strings where each value in
the dictionary represents a single equation. See Notes for a
description of the accepted syntax
data : DataFrame
Frame containing named variables
sigma : array_like
Prespecified residual covariance to use in GLS estimation. If
not provided, FGLS is implemented based on an estimate of sigma.
weights : dict-like
Dictionary-like object (e.g., a DataFrame) containing variable
weights. Each entry must have the same number of observations as
data. If an equation label is not a key in weights, the weights will
be set to unity
Returns
-------
model : IV3SLS
Model instance
Notes
-----
Models can be specified in one of two ways. The first uses curly
braces to encapsulate equations. The second uses a dictionary
where each key is an equation name.
Examples
--------
The simplest format uses standard formulas for each equation
in a dictionary. Best practice is to use an Ordered Dictionary
>>> import pandas as pd
>>> import numpy as np
>>> cols = ["y1", "x1_1", "x1_2", "z1", "y2", "x2_1", "x2_2", "z2"]
>>> data = pd.DataFrame(np.random.randn(500, 8), columns=cols)
>>> from linearmodels.system import IV3SLS
>>> formula = {"eq1": "y1 ~ 1 + x1_1 + [x1_2 ~ z1]",
... "eq2": "y2 ~ 1 + x2_1 + [x2_2 ~ z2]"}
>>> mod = IV3SLS.from_formula(formula, data)
The second format uses curly braces {} to surround distinct equations
>>> formula = "{y1 ~ 1 + x1_1 + [x1_2 ~ z1]} {y2 ~ 1 + x2_1 + [x2_2 ~ z2]}"
>>> mod = IV3SLS.from_formula(formula, data)
It is also possible to include equation labels when using curly braces
>>> formula = "{eq1: y1 ~ x1_1 + [x1_2 ~ z1]} {eq2: y2 ~ 1 + [x2_2 ~ z2]}"
>>> mod = IV3SLS.from_formula(formula, data)
"""
context = capture_context(1)
parser = SystemFormulaParser(formula, data, weights, context=context)
eqns = parser.data
mod = cls(eqns, sigma=sigma)
mod.formula = formula
return mod
|
(equations: 'Mapping[str, Mapping[str, ArrayLike | None] | Sequence[ArrayLike | None]]', *, sigma: 'ArrayLike | None' = None) -> 'None'
|
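A minimal construction sketch for the tuple interface (synthetic data; the variable names are illustrative, not from the library's documentation). Each value is a (dependent, exog, endog, instruments) tuple, and the system is estimated by FGLS by default.
>>> import numpy as np
>>> from linearmodels.system import IV3SLS
>>> rs = np.random.RandomState(0)
>>> n = 500
>>> z = rs.standard_normal((n, 2))                     # instruments
>>> endog = z @ np.ones((2, 1)) + rs.standard_normal((n, 1))
>>> exog = np.hstack([np.ones((n, 1)), rs.standard_normal((n, 2))])
>>> y1 = exog @ np.ones((3, 1)) + endog + rs.standard_normal((n, 1))
>>> y2 = exog @ np.ones((3, 1)) - endog + rs.standard_normal((n, 1))
>>> equations = {"eq1": (y1, exog, endog, z),
...              "eq2": (y2, exog, endog, z)}
>>> res = IV3SLS(equations).fit()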
42,887 |
linearmodels.system.model
|
__init__
| null |
def __init__(
self,
equations: Mapping[
str, Mapping[str, ArrayLike | None] | Sequence[ArrayLike | None]
],
*,
sigma: ArrayLike | None = None,
) -> None:
super().__init__(equations, sigma=sigma)
|
(self, equations: collections.abc.Mapping[str, collections.abc.Mapping[str, typing.Union[numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType]] | collections.abc.Sequence[typing.Union[numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType]]], *, sigma: Union[numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType] = None) -> NoneType
|
42,888 |
linearmodels.system.model
|
__repr__
| null |
def __repr__(self) -> str:
return self.__str__() + f"\nid: {hex(id(self))}"
|
(self) -> str
|
42,889 |
linearmodels.system.model
|
__str__
| null |
def __str__(self) -> str:
out = self._model_name + ", "
out += f"{len(self._y)} Equations:\n"
eqns = ", ".join(self._equations.keys())
out += "\n".join(textwrap.wrap(eqns, 70))
if self._common_exog:
out += "\nCommon Exogenous Variables"
return out
|
(self) -> str
|
42,890 |
linearmodels.system.model
|
_common_indiv_results
| null |
def _common_indiv_results(
self,
index: int,
beta: Float64Array,
cov: Float64Array,
wresid: Float64Array,
resid: Float64Array,
method: str,
cov_type: str,
cov_est: (
HomoskedasticCovariance
| HeteroskedasticCovariance
| KernelCovariance
| ClusteredCovariance
| GMMHeteroskedasticCovariance
| GMMHomoskedasticCovariance
),
iter_count: int,
debiased: bool,
constant: bool,
total_ss: float,
*,
weight_est: None | (
HomoskedasticWeightMatrix | HeteroskedasticWeightMatrix | KernelWeightMatrix
) = None,
) -> AttrDict:
loc = 0
for i in range(index):
loc += self._wx[i].shape[1]
i = index
stats = AttrDict()
# Static properties
stats["eq_label"] = self._eq_labels[i]
stats["dependent"] = self._dependent[i].cols[0]
stats["instruments"] = (
self._instr[i].cols if self._instr[i].shape[1] > 0 else None
)
stats["endog"] = self._endog[i].cols if self._endog[i].shape[1] > 0 else None
stats["method"] = method
stats["cov_type"] = cov_type
stats["cov_estimator"] = cov_est
stats["cov_config"] = cov_est.cov_config
stats["weight_estimator"] = weight_est
stats["index"] = self._dependent[i].rows
stats["original_index"] = self._original_index
stats["iter"] = iter_count
stats["debiased"] = debiased
stats["has_constant"] = constant
assert self._constant_loc is not None
stats["constant_loc"] = self._constant_loc[i]
# Parameters, errors and measures of fit
wxi = self._wx[i]
nobs, df = wxi.shape
b = beta[loc : loc + df]
e = wresid[:, [i]]
nobs = e.shape[0]
df_c = nobs - int(constant)
df_r = nobs - df
stats["params"] = b
stats["cov"] = cov[loc : loc + df, loc : loc + df]
stats["wresid"] = e
stats["nobs"] = nobs
stats["df_model"] = df
stats["resid"] = resid[:, [i]]
stats["fitted"] = self._x[i] @ b
stats["resid_ss"] = float(np.squeeze(resid[:, [i]].T @ resid[:, [i]]))
stats["total_ss"] = total_ss
stats["r2"] = 1.0 - stats.resid_ss / stats.total_ss
stats["r2a"] = 1.0 - (stats.resid_ss / df_r) / (stats.total_ss / df_c)
names = self._param_names[loc : loc + df]
offset = len(stats.eq_label) + 1
stats["param_names"] = [n[offset:] for n in names]
# F-statistic
stats["f_stat"] = self._f_stat(stats, debiased)
return stats
|
(self, index: int, beta: numpy.ndarray, cov: numpy.ndarray, wresid: numpy.ndarray, resid: numpy.ndarray, method: str, cov_type: str, cov_est: linearmodels.system.covariance.HomoskedasticCovariance | linearmodels.system.covariance.HeteroskedasticCovariance | linearmodels.system.covariance.KernelCovariance | linearmodels.system.covariance.ClusteredCovariance | linearmodels.system.covariance.GMMHeteroskedasticCovariance | linearmodels.system.covariance.GMMHomoskedasticCovariance, iter_count: int, debiased: bool, constant: bool, total_ss: float, *, weight_est: Union[NoneType, linearmodels.system.gmm.HomoskedasticWeightMatrix, linearmodels.system.gmm.HeteroskedasticWeightMatrix, linearmodels.system.gmm.KernelWeightMatrix] = None) -> linearmodels.shared.utility.AttrDict
|
42,891 |
linearmodels.system.model
|
_common_results
| null |
def _common_results(
self,
beta: Float64Array,
cov: Float64Array,
method: str,
iter_count: int,
nobs: int,
cov_type: str,
sigma: Float64Array,
individual: AttrDict,
debiased: bool,
) -> AttrDict:
results = AttrDict()
results["method"] = method
results["iter"] = iter_count
results["nobs"] = nobs
results["cov_type"] = cov_type
results["index"] = self._dependent[0].rows
results["original_index"] = self._original_index
names = list(individual.keys())
results["sigma"] = DataFrame(sigma, columns=names, index=names)
results["individual"] = individual
results["params"] = beta
results["df_model"] = beta.shape[0]
results["param_names"] = self._param_names
results["cov"] = cov
results["debiased"] = debiased
total_ss = resid_ss = 0.0
residuals = []
for key in individual:
total_ss += individual[key].total_ss
resid_ss += individual[key].resid_ss
residuals.append(individual[key].resid)
resid = np.hstack(residuals)
results["resid_ss"] = resid_ss
results["total_ss"] = total_ss
results["r2"] = 1.0 - results.resid_ss / results.total_ss
results["resid"] = resid
results["constraints"] = self._constraints
results["model"] = self
x = self._x
k = len(x)
loc = 0
fitted_vals = []
for i in range(k):
nb = x[i].shape[1]
b = beta[loc : loc + nb]
fitted_vals.append(x[i] @ b)
loc += nb
fitted = np.hstack(fitted_vals)
results["fitted"] = fitted
return results
|
(self, beta: numpy.ndarray, cov: numpy.ndarray, method: str, iter_count: int, nobs: int, cov_type: str, sigma: numpy.ndarray, individual: linearmodels.shared.utility.AttrDict, debiased: bool) -> linearmodels.shared.utility.AttrDict
|
42,892 |
linearmodels.system.model
|
_construct_xhat
| null |
def _construct_xhat(self) -> None:
k = len(self._x)
self._xhat = []
self._wxhat = []
for i in range(k):
x, z = self._x[i], self._z[i]
if z.shape == x.shape and np.all(z == x):
# OLS, no instruments
self._xhat.append(x)
self._wxhat.append(self._wx[i])
else:
delta = lstsq(z, x, rcond=None)[0]
xhat = z @ delta
self._xhat.append(xhat)
w = self._w[i]
self._wxhat.append(xhat * np.sqrt(w))
|
(self) -> NoneType
|
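_construct_xhat builds the first-stage fitted values :math:`\hat{X} = Z(Z'Z)^{-1}Z'X` equation by equation, using lstsq rather than forming the inverse explicitly. A standalone numpy sketch of the same projection, with hypothetical arrays x and z:
>>> import numpy as np
>>> rs = np.random.RandomState(0)
>>> z = rs.standard_normal((500, 3))                   # exog plus instruments
>>> x = z @ rs.standard_normal((3, 2)) + rs.standard_normal((500, 2))
>>> delta = np.linalg.lstsq(z, x, rcond=None)[0]       # first-stage coefficients
>>> xhat = z @ delta                                   # projection of x onto z
>>> np.allclose(xhat, z @ np.linalg.inv(z.T @ z) @ z.T @ x)
True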
42,893 |
linearmodels.system.model
|
_drop_missing
| null |
def _drop_missing(self) -> None:
k = len(self._dependent)
nobs = self._dependent[0].shape[0]
self._original_index = Index(self._dependent[0].rows)
missing = np.zeros(nobs, dtype=bool)
values = [self._dependent, self._exog, self._endog, self._instr, self._weights]
for i in range(k):
for value in values:
nulls = value[i].isnull
if nulls.any():
missing |= np.asarray(nulls)
missing_warning(missing, stacklevel=4)
if np.any(missing):
for i in range(k):
self._dependent[i].drop(missing)
self._exog[i].drop(missing)
self._endog[i].drop(missing)
self._instr[i].drop(missing)
self._weights[i].drop(missing)
|
(self) -> NoneType
|
42,894 |
linearmodels.system.model
|
_f_stat
| null |
def _f_stat(
self, stats: AttrDict, debiased: bool
) -> WaldTestStatistic | InvalidTestStatistic:
cov = stats.cov
k = cov.shape[0]
sel = list(range(k))
if stats.has_constant:
sel.pop(stats.constant_loc)
cov = cov[sel][:, sel]
params = stats.params[sel]
df = params.shape[0]
nobs = stats.nobs
null = "All parameters ex. constant are zero"
name = "Equation F-statistic"
try:
stat = float(np.squeeze(params.T @ inv(cov) @ params))
except np.linalg.LinAlgError:
return InvalidTestStatistic(
"Covariance is singular, possibly due " "to constraints.", name=name
)
if debiased:
total_reg = np.sum([s.shape[1] for s in self._wx])
df_denom = len(self._wx) * nobs - total_reg
wald = WaldTestStatistic(stat / df, null, df, df_denom=df_denom, name=name)
else:
return WaldTestStatistic(stat, null=null, df=df, name=name)
return wald
|
(self, stats: linearmodels.shared.utility.AttrDict, debiased: bool) -> linearmodels.shared.hypotheses.WaldTestStatistic | linearmodels.shared.hypotheses.InvalidTestStatistic
|
42,895 |
linearmodels.system.model
|
_gls_estimate
|
Core estimation routine for iterative GLS
|
def _gls_estimate(
self,
eps: Float64Array,
nobs: int,
total_cols: int,
ci: Sequence[int],
full_cov: bool,
debiased: bool,
) -> tuple[Float64Array, Float64Array, Float64Array, Float64Array]:
"""Core estimation routine for iterative GLS"""
wy, wx, wxhat = self._wy, self._wx, self._wxhat
if self._sigma is None:
sigma = eps.T @ eps / nobs
sigma *= self._sigma_scale(debiased)
else:
sigma = self._sigma
est_sigma = sigma
if not full_cov:
sigma = np.diag(np.diag(sigma))
sigma_inv = inv(sigma)
k = len(wy)
xpx = blocked_inner_prod(wxhat, sigma_inv)
xpy = np.zeros((total_cols, 1))
for i in range(k):
sy = np.zeros((nobs, 1))
for j in range(k):
sy += sigma_inv[i, j] * wy[j]
xpy[ci[i] : ci[i + 1]] = wxhat[i].T @ sy
beta = _parameters_from_xprod(xpx, xpy, constraints=self.constraints)
loc = 0
for j in range(k):
_wx = wx[j]
_wy = wy[j]
kx = _wx.shape[1]
eps[:, [j]] = _wy - _wx @ beta[loc : loc + kx]
loc += kx
return beta, eps, sigma, est_sigma
|
(self, eps: numpy.ndarray, nobs: int, total_cols: int, ci: collections.abc.Sequence[int], full_cov: bool, debiased: bool) -> tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, numpy.ndarray]
|
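blocked_inner_prod computes :math:`X'(\Sigma^{-1} \otimes I_N)X` without materializing the Kronecker product. For a small system, the same GLS solution can be written directly; this is a sketch of the algebra, not the library's code path:
>>> import numpy as np
>>> from scipy.linalg import block_diag
>>> rs = np.random.RandomState(0)
>>> n, k = 200, 2
>>> x_blocks = [rs.standard_normal((n, 3)) for _ in range(k)]
>>> y_blocks = [xb @ np.ones((3, 1)) + rs.standard_normal((n, 1)) for xb in x_blocks]
>>> x = block_diag(*x_blocks)                          # block-diagonal X
>>> y = np.vstack(y_blocks)                            # stacked Y
>>> sigma_inv = np.linalg.inv(np.eye(k))               # Sigma^{-1} (identity here)
>>> omega_inv = np.kron(sigma_inv, np.eye(n))
>>> beta = np.linalg.solve(x.T @ omega_inv @ x, x.T @ omega_inv @ y)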
42,896 |
linearmodels.system.model
|
_gls_finalize
|
Collect results to return after GLS estimation
|
def _gls_finalize(
self,
beta: Float64Array,
sigma: Float64Array,
full_sigma: Float64Array,
est_sigma: Float64Array,
gls_eps: Float64Array,
eps: Float64Array,
full_cov: bool,
cov_type: str,
iter_count: int,
**cov_config: bool,
) -> SystemResults:
"""Collect results to return after GLS estimation"""
k = len(self._wy)
# Covariance estimation
cov_estimator = COV_EST[cov_type]
gls_eps = np.reshape(gls_eps, (k, gls_eps.shape[0] // k)).T
eps = np.reshape(eps, (k, eps.shape[0] // k)).T
cov_est = cov_estimator(
self._wxhat,
eps,
sigma,
full_sigma,
gls=True,
constraints=self._constraints,
**cov_config,
)
cov = cov_est.cov
# Repackage results for individual equations
individual = AttrDict()
debiased = cov_config.get("debiased", False)
method = "Iterative GLS" if iter_count > 1 else "GLS"
for i in range(k):
cons = bool(self.has_constant.iloc[i])
if cons:
c = np.sqrt(self._w[i])
ye = self._wy[i] - c @ lstsq(c, self._wy[i], rcond=None)[0]
else:
ye = self._wy[i]
total_ss = float(np.squeeze(ye.T @ ye))
stats = self._common_indiv_results(
i,
beta,
cov,
gls_eps,
eps,
method,
cov_type,
cov_est,
iter_count,
debiased,
cons,
total_ss,
)
key = self._eq_labels[i]
individual[key] = stats
# Populate results dictionary
nobs = eps.size
results = self._common_results(
beta,
cov,
method,
iter_count,
nobs,
cov_type,
est_sigma,
individual,
debiased,
)
# wresid is different between GLS and OLS
wresiduals = []
for individual_key in individual:
wresiduals.append(individual[individual_key].wresid)
wresid = np.hstack(wresiduals)
results["wresid"] = wresid
results["cov_estimator"] = cov_est
results["cov_config"] = cov_est.cov_config
individual = results["individual"]
r2s = [individual[eq].r2 for eq in individual]
results["system_r2"] = self._system_r2(
eps, sigma, "gls", full_cov, debiased, r2s
)
return SystemResults(results)
|
(self, beta: numpy.ndarray, sigma: numpy.ndarray, full_sigma: numpy.ndarray, est_sigma: numpy.ndarray, gls_eps: numpy.ndarray, eps: numpy.ndarray, full_cov: bool, cov_type: str, iter_count: int, **cov_config: bool) -> linearmodels.system.results.SystemResults
|
42,897 |
linearmodels.system.model
|
_multivariate_ls_finalize
| null |
def _multivariate_ls_finalize(
self,
beta: Float64Array,
eps: Float64Array,
sigma: Float64Array,
cov_type: str,
**cov_config: bool,
) -> SystemResults:
k = len(self._wx)
# Covariance estimation
cov_estimator = COV_EST[cov_type]
cov_est = cov_estimator(
self._wxhat,
eps,
sigma,
sigma,
gls=False,
constraints=self._constraints,
**cov_config,
)
cov = cov_est.cov
individual = AttrDict()
debiased = cov_config.get("debiased", False)
for i in range(k):
wy = wye = self._wy[i]
w = self._w[i]
cons = bool(self.has_constant.iloc[i])
if cons:
wc = np.ones_like(wy) * np.sqrt(w)
wye = wy - wc @ lstsq(wc, wy, rcond=None)[0]
total_ss = float(np.squeeze(wye.T @ wye))
stats = self._common_indiv_results(
i,
beta,
cov,
eps,
eps,
"OLS",
cov_type,
cov_est,
0,
debiased,
cons,
total_ss,
)
key = self._eq_labels[i]
individual[key] = stats
nobs = eps.size
results = self._common_results(
beta, cov, "OLS", 0, nobs, cov_type, sigma, individual, debiased
)
results["wresid"] = results.resid
results["cov_estimator"] = cov_est
results["cov_config"] = cov_est.cov_config
individual = results["individual"]
r2s = [individual[eq].r2 for eq in individual]
results["system_r2"] = self._system_r2(eps, sigma, "ols", False, debiased, r2s)
return SystemResults(results)
|
(self, beta: numpy.ndarray, eps: numpy.ndarray, sigma: numpy.ndarray, cov_type: str, **cov_config: bool) -> linearmodels.system.results.SystemResults
|
42,898 |
linearmodels.system.model
|
_multivariate_ls_fit
| null |
def _multivariate_ls_fit(self) -> tuple[Float64Array, Float64Array]:
wy, wx, wxhat = self._wy, self._wx, self._wxhat
k = len(wxhat)
xpx = blocked_inner_prod(wxhat, np.eye(len(wxhat)))
_xpy = []
for i in range(k):
_xpy.append(wxhat[i].T @ wy[i])
xpy = np.vstack(_xpy)
beta = _parameters_from_xprod(xpx, xpy, constraints=self.constraints)
loc = 0
eps = []
for i in range(k):
nb = wx[i].shape[1]
b = beta[loc : loc + nb]
eps.append(wy[i] - wx[i] @ b)
loc += nb
eps_arr = np.hstack(eps)
return beta, eps_arr
|
(self) -> tuple[numpy.ndarray, numpy.ndarray]
|
42,899 |
linearmodels.system.model
|
_sigma_scale
| null |
def _sigma_scale(self, debiased: bool) -> float | Float64Array:
if not debiased:
return 1.0
nobs = float(self._wx[0].shape[0])
scales = np.array([nobs - x.shape[1] for x in self._wx], dtype=np.float64)
scales = cast(Float64Array, np.sqrt(nobs / scales))
return scales[:, None] @ scales[None, :]
|
(self, debiased: bool) -> float | numpy.ndarray
|
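The debiased scaling multiplies element (i, j) of the residual covariance by :math:`\sqrt{n/(n-k_i)}\,\sqrt{n/(n-k_j)}`, so each diagonal element receives the familiar :math:`n/(n-k_i)` degrees-of-freedom correction. A numeric sketch with hypothetical equation sizes:
>>> import numpy as np
>>> nobs = 100.0
>>> kx = np.array([3.0, 5.0])                          # regressors per equation
>>> scales = np.sqrt(nobs / (nobs - kx))
>>> scale_mat = scales[:, None] @ scales[None, :]
>>> round(float(scale_mat[0, 0]), 6)                   # n / (n - 3)
1.030928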
42,900 |
linearmodels.system.model
|
_system_r2
| null |
def _system_r2(
self,
eps: Float64Array,
sigma: Float64Array,
method: str,
full_cov: bool,
debiased: bool,
r2s: Sequence[float],
) -> Series:
sigma_resid = sigma
# System regression on a constant using weights if provided
wy, w = self._wy, self._w
wi = [cast(Float64Array, np.sqrt(weights)) for weights in w]
if method == "ols":
est_sigma = np.eye(len(wy))
else: # gls
est_sigma = sigma
if not full_cov:
est_sigma = np.diag(np.diag(est_sigma))
est_sigma_inv = inv(est_sigma)
nobs = wy[0].shape[0]
k = len(wy)
xpx = blocked_inner_prod(wi, est_sigma_inv)
xpy = np.zeros((k, 1))
for i in range(k):
sy = np.zeros((nobs, 1))
for j in range(k):
sy += est_sigma_inv[i, j] * wy[j]
xpy[i : (i + 1)] = wi[i].T @ sy
mu = _parameters_from_xprod(xpx, xpy)
eps_const = np.hstack([self._y[j] - mu[j] for j in range(k)])
# Judge
judge = 1 - (eps**2).sum() / (eps_const**2).sum()
# Dhrymes
tot_eps_const_sq = (eps_const**2).sum(0)
r2s_arr = np.asarray(r2s)
dhrymes = (r2s_arr * tot_eps_const_sq).sum() / tot_eps_const_sq.sum()
# Berndt
sigma_y = (eps_const.T @ eps_const / nobs) * self._sigma_scale(debiased)
berndt = np.nan
# Avoid division by 0
if np.linalg.det(sigma_y) > 0:
berndt = 1 - np.linalg.det(sigma_resid) / np.linalg.det(sigma_y)
mcelroy = np.nan
# Check that the matrix is invertible
if np.linalg.matrix_rank(sigma) == sigma.shape[0]:
# McElroy
sigma_m12 = inv_matrix_sqrt(sigma)
std_eps = eps @ sigma_m12
numerator = (std_eps**2).sum()
std_eps_const = eps_const @ sigma_m12
denom = (std_eps_const**2).sum()
mcelroy = 1.0 - numerator / denom
r2 = dict(mcelroy=mcelroy, berndt=berndt, judge=judge, dhrymes=dhrymes)
return Series(r2)
|
(self, eps: numpy.ndarray, sigma: numpy.ndarray, method: str, full_cov: bool, debiased: bool, r2s: collections.abc.Sequence[float]) -> pandas.core.series.Series
|
42,901 |
linearmodels.system.model
|
_validate_data
| null |
def _validate_data(self) -> None:
ids = []
for i, key in enumerate(self._equations):
self._eq_labels.append(key)
eq_data = self._equations[key]
dep_name = "dependent_" + str(i)
exog_name = "exog_" + str(i)
endog_name = "endog_" + str(i)
instr_name = "instr_" + str(i)
if isinstance(eq_data, (tuple, list)):
dep = IVData(eq_data[0], var_name=dep_name)
self._dependent.append(dep)
current_id: tuple[int, ...] = (id(eq_data[1]),)
self._exog.append(
IVData(eq_data[1], var_name=exog_name, nobs=dep.shape[0])
)
endog = IVData(eq_data[2], var_name=endog_name, nobs=dep.shape[0])
if endog.shape[1] > 0:
current_id += (id(eq_data[2]),)
ids.append(current_id)
self._endog.append(endog)
self._instr.append(
IVData(eq_data[3], var_name=instr_name, nobs=dep.shape[0])
)
if len(eq_data) == 5:
self._weights.append(IVData(eq_data[4]))
else:
dep_shape = self._dependent[-1].shape
self._weights.append(IVData(np.ones(dep_shape)))
elif isinstance(eq_data, (dict, Mapping)):
dep = IVData(eq_data["dependent"], var_name=dep_name)
self._dependent.append(dep)
exog = eq_data.get("exog", None)
self._exog.append(IVData(exog, var_name=exog_name, nobs=dep.shape[0]))
current_id = (id(exog),)
endog_values = eq_data.get("endog", None)
endog = IVData(endog_values, var_name=endog_name, nobs=dep.shape[0])
self._endog.append(endog)
if "endog" in eq_data:
current_id += (id(eq_data["endog"]),)
ids.append(current_id)
instr_values = eq_data.get("instruments", None)
instr = IVData(instr_values, var_name=instr_name, nobs=dep.shape[0])
self._instr.append(instr)
if "weights" in eq_data:
self._weights.append(IVData(eq_data["weights"]))
else:
self._weights.append(IVData(np.ones(dep.shape)))
else:
msg = UNKNOWN_EQ_TYPE.format(key=key, type=type(eq_data))
raise TypeError(msg)
self._has_instruments = False
for instr in self._instr:
self._has_instruments = self._has_instruments or (instr.shape[1] > 0)
for i, comps in enumerate(
zip(self._dependent, self._exog, self._endog, self._instr, self._weights)
):
shapes = [a.shape[0] for a in comps]
if min(shapes) != max(shapes):
raise ValueError(
"Dependent, exogenous, endogenous and "
"instruments, and weights, if provided, do "
"not have the same number of observations in "
"{eq}".format(eq=self._eq_labels[i])
)
self._drop_missing()
self._common_exog = len(set(ids)) == 1
if self._common_exog:
# Common exog requires weights are also equal
w0 = self._weights[0].ndarray
for w in self._weights:
self._common_exog = self._common_exog and bool(np.all(w.ndarray == w0))
constant = []
constant_loc = []
exog_ivd: IVData
for dep, exog_ivd, endog, instr, w, label in zip(
self._dependent,
self._exog,
self._endog,
self._instr,
self._weights,
self._eq_labels,
):
y = cast(Float64Array, dep.ndarray)
x = np.concatenate([exog_ivd.ndarray, endog.ndarray], 1, dtype=float)
z = np.concatenate([exog_ivd.ndarray, instr.ndarray], 1, dtype=float)
w_arr = cast(Float64Array, w.ndarray)
w_arr = w_arr / np.nanmean(w_arr)
w_sqrt = np.sqrt(w_arr)
self._w.append(w_arr)
self._y.append(y)
self._x.append(x)
self._z.append(z)
self._wy.append(y * w_sqrt)
self._wx.append(x * w_sqrt)
self._wz.append(z * w_sqrt)
cols = list(exog_ivd.cols) + list(endog.cols)
self._param_names.extend([label + "_" + col for col in cols])
if y.shape[0] <= x.shape[1]:
raise ValueError(
"Fewer observations than variables in "
"equation {eq}".format(eq=label)
)
if matrix_rank(x) < x.shape[1]:
raise ValueError(
"Equation {eq} regressor array is not full " "rank".format(eq=label)
)
if x.shape[1] > z.shape[1]:
raise ValueError(
"Equation {eq} has fewer instruments than "
"endogenous variables.".format(eq=label)
)
if z.shape[1] > z.shape[0]:
raise ValueError(
"Fewer observations than instruments in "
"equation {eq}".format(eq=label)
)
if matrix_rank(z) < z.shape[1]:
raise ValueError(
"Equation {eq} instrument array is full " "rank".format(eq=label)
)
for rhs in self._x:
const, const_loc = has_constant(rhs)
constant.append(const)
constant_loc.append(const_loc)
self._has_constant = Series(
constant, index=[d.cols[0] for d in self._dependent]
)
self._constant_loc = constant_loc
|
(self) -> NoneType
|
42,902 |
linearmodels.system.model
|
add_constraints
|
Add parameter constraints to a model.
Parameters
----------
r : DataFrame
Constraint matrix. nconstraints by nparameters
q : Series
Constraint values (nconstraints). If not set, set to 0
Notes
-----
Constraints are of the form
.. math ::
r \beta = q
The property `param_names` can be used to determine the order of
parameters.
|
def add_constraints(self, r: DataFrame, q: Series | None = None) -> None:
r"""
Add parameter constraints to a model.
Parameters
----------
r : DataFrame
Constraint matrix. nconstraints by nparameters
q : Series
Constraint values (nconstraints). If not set, set to 0
Notes
-----
Constraints are of the form
.. math ::
r \beta = q
The property `param_names` can be used to determine the order of
parameters.
"""
self._constraints = LinearConstraint(
r, q=q, num_params=len(self._param_names), require_pandas=True
)
|
(self, r: pandas.core.frame.DataFrame, q: Optional[pandas.core.series.Series] = None) -> NoneType
|
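For example, to restrict the first two parameters (in param_names order) to be equal, r needs a single row with a 1 and a -1, and q can be omitted since it defaults to zero. A sketch, assuming mod is a system model constructed as in the earlier example:
>>> import numpy as np
>>> import pandas as pd
>>> nparams = len(mod.param_names)
>>> row = np.zeros((1, nparams))
>>> row[0, 0], row[0, 1] = 1.0, -1.0                   # beta_0 - beta_1 = 0
>>> r = pd.DataFrame(row, columns=mod.param_names)
>>> mod.add_constraints(r)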
42,903 |
linearmodels.system.model
|
fit
|
Estimate model parameters
Parameters
----------
method : {None, "gls", "ols"}
Estimation method. Default auto selects based on regressors,
using OLS only if all regressors are identical. The other two
values force the use of GLS or OLS.
full_cov : bool
Flag indicating whether to utilize information in correlations
when estimating the model with GLS
iterate : bool
Flag indicating whether to iterate GLS until convergence or until
iter_limit iterations have been completed
iter_limit : int
Maximum number of iterations for iterative GLS
tol : float
Tolerance to use when checking for convergence in iterative GLS
cov_type : str
Name of covariance estimator. Valid options are
* "unadjusted", "homoskedastic" - Classic covariance estimator
* "robust", "heteroskedastic" - Heteroskedasticity robust
covariance estimator
* "kernel" - Allows for heteroskedasticity and autocorrelation
* "clustered" - Allows for 1 and 2-way clustering of errors
(Rogers).
**cov_config
Additional parameters to pass to covariance estimator. All
estimators support debiased which employs a small-sample adjustment
Returns
-------
results : SystemResults
Estimation results
See Also
--------
linearmodels.system.covariance.HomoskedasticCovariance
linearmodels.system.covariance.HeteroskedasticCovariance
linearmodels.system.covariance.KernelCovariance
linearmodels.system.covariance.ClusteredCovariance
|
def fit(
self,
*,
method: Literal["ols", "gls", None] = None,
full_cov: bool = True,
iterate: bool = False,
iter_limit: int = 100,
tol: float = 1e-6,
cov_type: str = "robust",
**cov_config: bool,
) -> SystemResults:
"""
Estimate model parameters
Parameters
----------
method : {None, "gls", "ols"}
Estimation method. Default auto selects based on regressors,
using OLS only if all regressors are identical. The other two
values force the use of GLS or OLS.
full_cov : bool
Flag indicating whether to utilize information in correlations
when estimating the model with GLS
iterate : bool
Flag indicating whether to iterate GLS until convergence or until
iter_limit iterations have been completed
iter_limit : int
Maximum number of iterations for iterative GLS
tol : float
Tolerance to use when checking for convergence in iterative GLS
cov_type : str
Name of covariance estimator. Valid options are
* "unadjusted", "homoskedastic" - Classic covariance estimator
* "robust", "heteroskedastic" - Heteroskedasticity robust
covariance estimator
* "kernel" - Allows for heteroskedasticity and autocorrelation
* "clustered" - Allows for 1 and 2-way clustering of errors
(Rogers).
**cov_config
Additional parameters to pass to covariance estimator. All
estimators support debiased which employs a small-sample adjustment
Returns
-------
results : SystemResults
Estimation results
See Also
--------
linearmodels.system.covariance.HomoskedasticCovariance
linearmodels.system.covariance.HeteroskedasticCovariance
linearmodels.system.covariance.KernelCovariance
linearmodels.system.covariance.ClusteredCovariance
"""
if method is None:
method = (
"ols" if (self._common_exog and self._constraints is None) else "gls"
)
else:
if method.lower() not in ("ols", "gls"):
raise ValueError(
f"method must be 'ols' or 'gls' when not None. Got {method}."
)
method = cast(Literal["ols", "gls"], method.lower())
cov_type = cov_type.lower()
if cov_type not in COV_TYPES:
raise ValueError(f"Unknown cov_type: {cov_type}")
cov_type = COV_TYPES[cov_type]
k = len(self._dependent)
col_sizes = [0] + [v.shape[1] for v in self._x]
col_idx = [int(i) for i in np.cumsum(col_sizes)]
total_cols = col_idx[-1]
self._construct_xhat()
beta, eps = self._multivariate_ls_fit()
nobs = eps.shape[0]
debiased = cov_config.get("debiased", False)
full_sigma = sigma = (eps.T @ eps / nobs) * self._sigma_scale(debiased)
if method == "ols":
return self._multivariate_ls_finalize(
beta, eps, sigma, cov_type, **cov_config
)
beta_hist = [beta]
nobs = eps.shape[0]
iter_count = 0
delta = np.inf
while (
(iter_count < iter_limit and iterate) or iter_count == 0
) and delta >= tol:
beta, eps, sigma, est_sigma = self._gls_estimate(
eps, nobs, total_cols, col_idx, full_cov, debiased
)
beta_hist.append(beta)
diff = beta_hist[-1] - beta_hist[-2]
delta = float(np.sqrt(np.mean(diff**2)))
iter_count += 1
sigma_m12 = inv_matrix_sqrt(sigma)
wy = blocked_column_product(self._wy, sigma_m12)
wx = blocked_diag_product(self._wx, sigma_m12)
gls_eps = wy - wx @ beta
y = blocked_column_product(self._y, np.eye(k))
x = blocked_diag_product(self._x, np.eye(k))
eps = y - x @ beta
return self._gls_finalize(
beta,
sigma,
full_sigma,
est_sigma,
gls_eps,
eps,
full_cov,
cov_type,
iter_count,
**cov_config,
)
|
(self, *, method: Optional[Literal['ols', 'gls', None]] = None, full_cov: bool = True, iterate: bool = False, iter_limit: int = 100, tol: float = 1e-06, cov_type: str = 'robust', **cov_config: bool) -> linearmodels.system.results.SystemResults
|
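A sketch of typical fit calls on a model mod constructed as above; all keyword names come from the signature, and debiased is passed through cov_config:
>>> res = mod.fit()                                    # FGLS, robust covariance
>>> res = mod.fit(method="ols", cov_type="unadjusted", debiased=True)
>>> res = mod.fit(iterate=True, iter_limit=50, tol=1e-8, cov_type="kernel")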
42,904 |
linearmodels.system.model
|
predict
|
Predict values for additional data
Parameters
----------
params : array_like
Model parameters (nvar by 1)
equations : dict
Dictionary-like structure containing exogenous and endogenous
variables. Each key is an equation label and must
match the labels used to fit the model. Each value must be a
dictionary with keys "exog" and "endog". If predictions are not
required for one or more of the model equations, these keys can
be omitted.
data : DataFrame
Values to use when making predictions from a model constructed
from a formula
eval_env : int
Depth to use when evaluating formulas.
Returns
-------
predictions : DataFrame
Fitted values from supplied data and parameters
Notes
-----
If `data` is not None, then `equations` must be None.
Predictions from models constructed using formulas can
be computed using either `equations`, which treats these as
arrays of values corresponding to the formula-processed data, or
using `data`, which will be processed using the formula used to
construct the model, producing values that correspond to the
original model specification.
When using `exog` and `endog`, the regressor array for a particular
equation is assembled as
`[equations[eqn]["exog"], equations[eqn]["endog"]]` where `eqn` is
an equation label. These must correspond to the columns in the
estimated model.
|
def predict(
self,
params: ArrayLike,
*,
equations: Mapping[str, Mapping[str, ArrayLike]] | None = None,
data: DataFrame | None = None,
eval_env: int = 1,
) -> DataFrame:
"""
Predict values for additional data
Parameters
----------
params : array_like
Model parameters (nvar by 1)
equations : dict
Dictionary-like structure containing exogenous and endogenous
variables. Each key is an equation label and must
match the labels used to fit the model. Each value must be a
dictionary with keys "exog" and "endog". If predictions are not
required for one or more of the model equations, these keys can
be omitted.
data : DataFrame
Values to use when making predictions from a model constructed
from a formula
eval_env : int
Depth to use when evaluating formulas.
Returns
-------
predictions : DataFrame
Fitted values from supplied data and parameters
Notes
-----
If `data` is not None, then `equations` must be None.
Predictions from models constructed using formulas can
be computed using either `equations`, which treats these as
arrays of values corresponding to the formula-processed data, or
using `data`, which will be processed using the formula used to
construct the model, producing values that correspond to the
original model specification.
When using `exog` and `endog`, the regressor array for a particular
equation is assembled as
`[equations[eqn]["exog"], equations[eqn]["endog"]]` where `eqn` is
an equation label. These must correspond to the columns in the
estimated model.
"""
if data is not None:
assert self.formula is not None
parser = SystemFormulaParser(
self.formula, data=data, context=capture_context(eval_env)
)
equations_d = parser.data
else:
if equations is None:
raise ValueError("One of equations or data must be provided.")
assert equations is not None
equations_d = {k: dict(v) for k, v in equations.items()}
params = np.atleast_2d(np.asarray(params))
if params.shape[0] == 1:
params = params.T
nx = int(sum(_x.shape[1] for _x in self._x))
if params.shape[0] != nx:
raise ValueError(
f"Parameters must have {nx} elements; found {params.shape[0]}."
)
loc = 0
out = dict()
for i, label in enumerate(self._eq_labels):
kx = self._x[i].shape[1]
if label in equations_d:
b = params[loc : loc + kx]
eqn = equations_d[label]
exog = eqn.get("exog", None)
endog = eqn.get("endog", None)
if exog is None and endog is None:
loc += kx
continue
if exog is not None:
exog_endog = IVData(exog).pandas
if endog is not None:
endog_ivd = IVData(endog)
exog_endog = concat([exog_endog, endog_ivd.pandas], axis=1)
else:
exog_endog = IVData(endog).pandas
fitted = np.asarray(exog_endog) @ b
fitted_df = DataFrame(fitted, index=exog_endog.index, columns=[label])
out[label] = fitted_df
loc += kx
out_df = reduce(
lambda left, right: left.merge(
right, how="outer", left_index=True, right_index=True
),
[out[key] for key in out],
)
return out_df
|
(self, params: Union[numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series], *, equations: Optional[collections.abc.Mapping[str, collections.abc.Mapping[str, Union[numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series]]]] = None, data: Optional[pandas.core.frame.DataFrame] = None, eval_env: int = 1) -> pandas.core.frame.DataFrame
|
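A prediction sketch for a model fit from arrays, supplying new values for one equation only; "eq1" is assumed to be a label used at construction, and omitted equations are skipped:
>>> import numpy as np
>>> rs = np.random.RandomState(1)
>>> new_exog = rs.standard_normal((10, 3))             # same columns as eq1 exog
>>> new_endog = rs.standard_normal((10, 1))            # same columns as eq1 endog
>>> fitted = mod.predict(res.params,
...                      equations={"eq1": {"exog": new_exog,
...                                         "endog": new_endog}})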
42,905 |
linearmodels.system.model
|
reset_constraints
|
Remove all model constraints
|
def reset_constraints(self) -> None:
"""Remove all model constraints"""
self._constraints = None
|
(self) -> NoneType
|
42,906 |
linearmodels.iv.model
|
IVGMM
|
Estimation of IV models using the generalized method of moments (GMM)
Parameters
----------
dependent : array_like
Endogenous variables (nobs by 1)
exog : array_like
Exogenous regressors (nobs by nexog)
endog : array_like
Endogenous regressors (nobs by nendog)
instruments : array_like
Instrumental variables (nobs by ninstr)
weights : array_like
Observation weights used in estimation
weight_type : str
Name of moment condition weight function to use in the GMM estimation
**weight_config
Additional keyword arguments to pass to the moment condition weight
function
Notes
-----
Available GMM weight functions are:
* "unadjusted", "homoskedastic" - Assumes moment conditions are
homoskedastic
* "robust", "heteroskedastic" - Allows for heteroskedasticity by not
autocorrelation
* "kernel" - Allows for heteroskedasticity and autocorrelation
* "cluster" - Allows for one-way cluster dependence
The estimator is defined as
.. math::
\hat{\beta}_{gmm}=(X'ZW^{-1}Z'X)^{-1}X'ZW^{-1}Z'Y
where :math:`W` is a positive definite weight matrix and :math:`Z`
contains both the exogenous regressors and the instruments.
.. todo::
* VCV: bootstrap
See Also
--------
IV2SLS, IVLIML, IVGMMCUE
|
class IVGMM(_IVGMMBase):
r"""
Estimation of IV models using the generalized method of moments (GMM)
Parameters
----------
dependent : array_like
Endogenous variables (nobs by 1)
exog : array_like
Exogenous regressors (nobs by nexog)
endog : array_like
Endogenous regressors (nobs by nendog)
instruments : array_like
Instrumental variables (nobs by ninstr)
weights : array_like
Observation weights used in estimation
weight_type : str
Name of moment condition weight function to use in the GMM estimation
**weight_config
Additional keyword arguments to pass to the moment condition weight
function
Notes
-----
Available GMM weight functions are:
* "unadjusted", "homoskedastic" - Assumes moment conditions are
homoskedastic
* "robust", "heteroskedastic" - Allows for heteroskedasticity by not
autocorrelation
* "kernel" - Allows for heteroskedasticity and autocorrelation
* "cluster" - Allows for one-way cluster dependence
The estimator is defined as
.. math::
\hat{\beta}_{gmm}=(X'ZW^{-1}Z'X)^{-1}X'ZW^{-1}Z'Y
where :math:`W` is a positive definite weight matrix and :math:`Z`
contains both the exogenous regressors and the instruments.
.. todo::
* VCV: bootstrap
See Also
--------
IV2SLS, IVLIML, IVGMMCUE
"""
def __init__(
self,
dependent: IVDataLike,
exog: IVDataLike | None,
endog: IVDataLike | None,
instruments: IVDataLike | None,
*,
weights: IVDataLike | None = None,
weight_type: str = "robust",
**weight_config: Any,
):
super().__init__(dependent, exog, endog, instruments, weights=weights)
self._method = "IV-GMM"
weight_matrix_estimator = WEIGHT_MATRICES[weight_type]
self._weight = weight_matrix_estimator(**weight_config)
self._weight_type = weight_type
self._weight_config = self._weight.config
@staticmethod
def from_formula(
formula: str,
data: DataFrame,
*,
weights: IVDataLike | None = None,
weight_type: str = "robust",
**weight_config: Any,
) -> IVGMM:
"""
Parameters
----------
formula : str
Formula modified for the IV syntax described in the notes
section
data : DataFrame
DataFrame containing the variables used in the formula
weights : array_like
Observation weights used in estimation
weight_type : str
Name of moment condition weight function to use in the GMM estimation
**weight_config
Additional keyword arguments to pass to the moment condition weight
function
Notes
-----
The IV formula modifies the standard formula syntax to include a
block of the form [endog ~ instruments] which is used to indicate
the list of endogenous variables and instruments. The general
structure is `dependent ~ exog [endog ~ instruments]` and it must
be the case that the formula expressions constructed from blocks
`dependent ~ exog endog` and `dependent ~ exog instruments` are both
valid formulas.
A constant must be explicitly included using "1 +" if required.
Returns
-------
IVGMM
Model instance
Examples
--------
>>> import numpy as np
>>> from linearmodels.datasets import wage
>>> from linearmodels.iv import IVGMM
>>> data = wage.load()
>>> formula = "np.log(wage) ~ 1 + exper + exper ** 2 + brthord + [educ ~ sibs]"
>>> mod = IVGMM.from_formula(formula, data)
"""
mod = _gmm_model_from_formula(
IVGMM, formula, data, weights, weight_type, **weight_config
)
assert isinstance(mod, IVGMM)
return mod
@staticmethod
def estimate_parameters(
x: Float64Array, y: Float64Array, z: Float64Array, w: Float64Array
) -> Float64Array:
"""
Parameters
----------
x : ndarray
Regressor matrix (nobs by nvar)
y : ndarray
Regressand matrix (nobs by 1)
z : ndarray
Instrument matrix (nobs by ninstr)
w : ndarray
GMM weight matrix (ninstr by ninstr)
Returns
-------
ndarray
Estimated parameters (nvar by 1)
Notes
-----
Exposed as a static method to facilitate estimation with other data,
e.g., bootstrapped samples. Performs no error checking.
"""
xpz = x.T @ z
zpy = z.T @ y
return inv(xpz @ w @ xpz.T) @ (xpz @ w @ zpy)
def fit(
self,
*,
iter_limit: int = 2,
tol: float = 1e-4,
initial_weight: Float64Array | None = None,
cov_type: str = "robust",
debiased: bool = False,
**cov_config: Any,
) -> OLSResults | IVGMMResults:
"""
Estimate model parameters
Parameters
----------
iter_limit : int
Maximum number of iterations. Default is 2, which produces
two-step efficient GMM estimates. Larger values can be used
to iterate between parameter estimation and optimal weight
matrix estimation until convergence.
tol : float
Convergence criterion, measured as the covariance-normalized change in
parameters across iterations, where the covariance estimator is
based on the first-step parameter estimates.
initial_weight : ndarray
Initial weighting matrix to use in the first step. If not
specified, uses the average outer-product of the set containing
the exogenous variables and instruments.
cov_type : str
Name of covariance estimator to use. Available covariance
functions are:
* "unadjusted", "homoskedastic" - Assumes moment conditions are
homoskedastic
* "robust", "heteroskedastic" - Allows for heteroskedasticity but
not autocorrelation
* "kernel" - Allows for heteroskedasticity and autocorrelation
* "cluster" - Allows for one-way cluster dependence
debiased : bool
Flag indicating whether to debias the covariance estimator using
a degrees-of-freedom adjustment.
**cov_config
Additional parameters to pass to covariance estimator. Supported
parameters depend on specific covariance structure assumed. See
:class:`linearmodels.iv.gmm.IVGMMCovariance` for details
on the available options. Defaults are used if no covariance
configuration is provided.
Returns
-------
IVGMMResults
Results container
See also
--------
linearmodels.iv.gmm.IVGMMCovariance
"""
wy, wx, wz = self._wy, self._wx, self._wz
nobs = wy.shape[0]
weight_matrix = self._weight.weight_matrix
k_wz = wz.shape[1]
if initial_weight is not None:
initial_weight = asarray(initial_weight)
if initial_weight.ndim != 2 or initial_weight.shape != (k_wz, k_wz):
raise ValueError(f"initial_weight must be a {k_wz} by {k_wz} array")
wmat = inv(wz.T @ wz / nobs) if initial_weight is None else initial_weight
_params = params = self.estimate_parameters(wx, wy, wz, wmat)
iters, norm = 1, 10 * tol + 1
vinv = eye(params.shape[0])
while iters < iter_limit and norm > tol:
eps = wy - wx @ params
wmat = inv(weight_matrix(wx, wz, eps))
params = self.estimate_parameters(wx, wy, wz, wmat)
delta = params - _params
if iters == 1:
xpz = wx.T @ wz / nobs
v = (xpz @ wmat @ xpz.T) / nobs
vinv = inv(v)
_params = params
norm = float(squeeze(delta.T @ vinv @ delta))
iters += 1
cov_config["debiased"] = debiased
cov_estimator = IVGMMCovariance(
wx, wy, wz, params, wmat, cov_type, **cov_config
)
results = self._post_estimation(params, cov_estimator, cov_type)
gmm_pe = self._gmm_post_estimation(params, wmat, iters)
results.update(gmm_pe)
return IVGMMResults(results, self)
def _gmm_post_estimation(
self, params: Float64Array, weight_mat: Float64Array, iters: int
) -> dict[str, Any]:
"""GMM-specific post-estimation results"""
instr = self._instr_columns
gmm_specific = {
"weight_mat": DataFrame(weight_mat, columns=instr, index=instr),
"weight_type": self._weight_type,
"weight_config": self._weight_type,
"iterations": iters,
"j_stat": self._j_statistic(params, weight_mat),
}
return gmm_specific
|
(dependent: 'IVDataLike', exog: 'IVDataLike | None', endog: 'IVDataLike | None', instruments: 'IVDataLike | None', *, weights: 'IVDataLike | None' = None, weight_type: 'str' = 'robust', **weight_config: 'Any')
|
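A numpy sketch of the estimator in the Notes, :math:`\hat{\beta}_{gmm}=(X'ZW^{-1}Z'X)^{-1}X'ZW^{-1}Z'Y`, using the average outer product of the instruments to form the first-step weight (synthetic data; illustrative only):
>>> import numpy as np
>>> rs = np.random.RandomState(0)
>>> n = 500
>>> z = np.hstack([np.ones((n, 1)), rs.standard_normal((n, 2))])   # exog + instruments
>>> x_endog = z[:, 1:] @ np.ones((2, 1)) + rs.standard_normal((n, 1))
>>> x = np.hstack([z[:, :1], x_endog])                 # constant + endogenous
>>> y = x @ np.ones((2, 1)) + rs.standard_normal((n, 1))
>>> w_inv = np.linalg.inv(z.T @ z / n)                 # W^{-1}, first-step weight
>>> xpz = x.T @ z
>>> beta = np.linalg.solve(xpz @ w_inv @ xpz.T, xpz @ w_inv @ z.T @ y)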
42,907 |
linearmodels.iv.model
|
__init__
| null |
def __init__(
self,
dependent: IVDataLike,
exog: IVDataLike | None,
endog: IVDataLike | None,
instruments: IVDataLike | None,
*,
weights: IVDataLike | None = None,
weight_type: str = "robust",
**weight_config: Any,
):
super().__init__(dependent, exog, endog, instruments, weights=weights)
self._method = "IV-GMM"
weight_matrix_estimator = WEIGHT_MATRICES[weight_type]
self._weight = weight_matrix_estimator(**weight_config)
self._weight_type = weight_type
self._weight_config = self._weight.config
|
(self, dependent: Union[linearmodels.iv.data.IVData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series], exog: Union[linearmodels.iv.data.IVData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType], endog: Union[linearmodels.iv.data.IVData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType], instruments: Union[linearmodels.iv.data.IVData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType], *, weights: Union[linearmodels.iv.data.IVData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType] = None, weight_type: str = 'robust', **weight_config: Any)
|
42,910 |
linearmodels.iv.model
|
_gmm_post_estimation
|
GMM-specific post-estimation results
|
def _gmm_post_estimation(
self, params: Float64Array, weight_mat: Float64Array, iters: int
) -> dict[str, Any]:
"""GMM-specific post-estimation results"""
instr = self._instr_columns
gmm_specific = {
"weight_mat": DataFrame(weight_mat, columns=instr, index=instr),
"weight_type": self._weight_type,
"weight_config": self._weight_type,
"iterations": iters,
"j_stat": self._j_statistic(params, weight_mat),
}
return gmm_specific
|
(self, params: numpy.ndarray, weight_mat: numpy.ndarray, iters: int) -> dict[str, typing.Any]
|
42,911 |
linearmodels.iv.model
|
_j_statistic
|
J stat and test
|
def _j_statistic(
self, params: Float64Array, weight_mat: Float64Array
) -> WaldTestStatistic:
"""J stat and test"""
y, x, z = self._wy, self._wx, self._wz
nobs, nvar, ninstr = y.shape[0], x.shape[1], z.shape[1]
eps = y - x @ params
g_bar = (z * eps).mean(0)
stat = float(nobs * g_bar.T @ weight_mat @ g_bar)
null = "Expected moment conditions are equal to 0"
return WaldTestStatistic(stat, null, ninstr - nvar)
|
(self, params: numpy.ndarray, weight_mat: numpy.ndarray) -> linearmodels.shared.hypotheses.WaldTestStatistic
|
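The statistic is :math:`J = n\,\bar{g}'W\bar{g}`, where :math:`\bar{g}` is the average moment condition and weight_mat is the efficient weight matrix; under the null it is asymptotically chi-squared with ninstr - nvar degrees of freedom. A self-contained sketch of the computation:
>>> import numpy as np
>>> rs = np.random.RandomState(0)
>>> n = 500
>>> z = rs.standard_normal((n, 3))                     # three instruments
>>> eps = rs.standard_normal((n, 1))                   # residuals under the null
>>> g = z * eps                                        # per-observation moments
>>> g_bar = g.mean(0)
>>> wmat = np.linalg.inv(g.T @ g / n)                  # efficient weight matrix
>>> j_stat = float(n * g_bar @ wmat @ g_bar)           # ~ chi2(ninstr - nvar)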
42,914 |
linearmodels.iv.model
|
estimate_parameters
|
Parameters
----------
x : ndarray
Regressor matrix (nobs by nvar)
y : ndarray
Regressand matrix (nobs by 1)
z : ndarray
Instrument matrix (nobs by ninstr)
w : ndarray
GMM weight matrix (ninstr by ninstr)
Returns
-------
ndarray
Estimated parameters (nvar by 1)
Notes
-----
Exposed as a static method to facilitate estimation with other data,
e.g., bootstrapped samples. Performs no error checking.
|
@staticmethod
def estimate_parameters(
x: Float64Array, y: Float64Array, z: Float64Array, w: Float64Array
) -> Float64Array:
"""
Parameters
----------
x : ndarray
Regressor matrix (nobs by nvar)
y : ndarray
Regressand matrix (nobs by 1)
z : ndarray
Instrument matrix (nobs by ninstr)
w : ndarray
GMM weight matrix (ninstr by ninstr)
Returns
-------
ndarray
Estimated parameters (nvar by 1)
Notes
-----
Exposed as a static method to facilitate estimation with other data,
e.g., bootstrapped samples. Performs no error checking.
"""
xpz = x.T @ z
zpy = z.T @ y
return inv(xpz @ w @ xpz.T) @ (xpz @ w @ zpy)
|
(x: numpy.ndarray, y: numpy.ndarray, z: numpy.ndarray, w: numpy.ndarray) -> numpy.ndarray
|
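Because the method is static and performs no checking, the bootstrap use case mentioned in the Notes is direct. A sketch with synthetic arrays:
>>> import numpy as np
>>> from linearmodels.iv import IVGMM
>>> rs = np.random.RandomState(0)
>>> n = 500
>>> z = rs.standard_normal((n, 3))
>>> x = z @ rs.standard_normal((3, 2)) + rs.standard_normal((n, 2))
>>> y = x @ np.ones((2, 1)) + rs.standard_normal((n, 1))
>>> w = np.linalg.inv(z.T @ z / n)
>>> idx = rs.randint(0, n, size=n)                     # one bootstrap resample
>>> beta_b = IVGMM.estimate_parameters(x[idx], y[idx], z[idx], w)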
42,915 |
linearmodels.iv.model
|
fit
|
Estimate model parameters
Parameters
----------
iter_limit : int
Maximum number of iterations. Default is 2, which produces
two-step efficient GMM estimates. Larger values can be used
to iterate between parameter estimation and optimal weight
matrix estimation until convergence.
tol : float
Convergence criterion, measured as the covariance-normalized change in
parameters across iterations, where the covariance estimator is
based on the first-step parameter estimates.
initial_weight : ndarray
Initial weighting matrix to use in the first step. If not
specified, uses the average outer-product of the set containing
the exogenous variables and instruments.
cov_type : str
Name of covariance estimator to use. Available covariance
functions are:
* "unadjusted", "homoskedastic" - Assumes moment conditions are
homoskedastic
* "robust", "heteroskedastic" - Allows for heteroskedasticity but
not autocorrelation
* "kernel" - Allows for heteroskedasticity and autocorrelation
* "cluster" - Allows for one-way cluster dependence
debiased : bool
Flag indicating whether to debias the covariance estimator using
a degrees-of-freedom adjustment.
**cov_config
Additional parameters to pass to covariance estimator. Supported
parameters depend on specific covariance structure assumed. See
:class:`linearmodels.iv.gmm.IVGMMCovariance` for details
on the available options. Defaults are used if no covariance
configuration is provided.
Returns
-------
IVGMMResults
Results container
See also
--------
linearmodels.iv.gmm.IVGMMCovariance
|
def fit(
self,
*,
iter_limit: int = 2,
tol: float = 1e-4,
initial_weight: Float64Array | None = None,
cov_type: str = "robust",
debiased: bool = False,
**cov_config: Any,
) -> OLSResults | IVGMMResults:
"""
Estimate model parameters
Parameters
----------
iter_limit : int
Maximum number of iterations. Default is 2, which produces
two-step efficient GMM estimates. Larger values can be used
to iterate between parameter estimation and optimal weight
matrix estimation until convergence.
tol : float
Convergence criterion, measured as the covariance-normalized change in
parameters across iterations, where the covariance estimator is
based on the first-step parameter estimates.
initial_weight : ndarray
Initial weighting matrix to use in the first step. If not
specified, uses the average outer-product of the set containing
the exogenous variables and instruments.
cov_type : str
Name of covariance estimator to use. Available covariance
functions are:
* "unadjusted", "homoskedastic" - Assumes moment conditions are
homoskedastic
* "robust", "heteroskedastic" - Allows for heteroskedasticity but
not autocorrelation
* "kernel" - Allows for heteroskedasticity and autocorrelation
* "cluster" - Allows for one-way cluster dependence
debiased : bool
Flag indicating whether to debias the covariance estimator using
a degrees-of-freedom adjustment.
**cov_config
Additional parameters to pass to covariance estimator. Supported
parameters depend on specific covariance structure assumed. See
:class:`linearmodels.iv.gmm.IVGMMCovariance` for details
on the available options. Defaults are used if no covariance
configuration is provided.
Returns
-------
IVGMMResults
Results container
See also
--------
linearmodels.iv.gmm.IVGMMCovariance
"""
wy, wx, wz = self._wy, self._wx, self._wz
nobs = wy.shape[0]
weight_matrix = self._weight.weight_matrix
k_wz = wz.shape[1]
if initial_weight is not None:
initial_weight = asarray(initial_weight)
if initial_weight.ndim != 2 or initial_weight.shape != (k_wz, k_wz):
raise ValueError(f"initial_weight must be a {k_wz} by {k_wz} array")
wmat = inv(wz.T @ wz / nobs) if initial_weight is None else initial_weight
_params = params = self.estimate_parameters(wx, wy, wz, wmat)
iters, norm = 1, 10 * tol + 1
vinv = eye(params.shape[0])
while iters < iter_limit and norm > tol:
eps = wy - wx @ params
wmat = inv(weight_matrix(wx, wz, eps))
params = self.estimate_parameters(wx, wy, wz, wmat)
delta = params - _params
if iters == 1:
xpz = wx.T @ wz / nobs
v = (xpz @ wmat @ xpz.T) / nobs
vinv = inv(v)
_params = params
norm = float(squeeze(delta.T @ vinv @ delta))
iters += 1
cov_config["debiased"] = debiased
cov_estimator = IVGMMCovariance(
wx, wy, wz, params, wmat, cov_type, **cov_config
)
results = self._post_estimation(params, cov_estimator, cov_type)
gmm_pe = self._gmm_post_estimation(params, wmat, iters)
results.update(gmm_pe)
return IVGMMResults(results, self)
|
(self, *, iter_limit: int = 2, tol: float = 0.0001, initial_weight: Optional[numpy.ndarray] = None, cov_type: str = 'robust', debiased: bool = False, **cov_config: Any) -> linearmodels.iv.results.OLSResults | linearmodels.iv.results.IVGMMResults
|
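A sketch of typical fit calls on an IVGMM model mod; ids is a hypothetical array of cluster codes, and the exact covariance options are documented in linearmodels.iv.gmm.IVGMMCovariance:
>>> res = mod.fit()                                    # two-step efficient GMM
>>> res = mod.fit(iter_limit=100, tol=1e-6)            # iterate until convergence
>>> res = mod.fit(cov_type="kernel", debiased=True)
>>> res = mod.fit(cov_type="cluster", clusters=ids)    # ids: hypothetical cluster codes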
42,916 |
linearmodels.iv.model
|
from_formula
|
Parameters
----------
formula : str
Formula modified for the IV syntax described in the notes
section
data : DataFrame
DataFrame containing the variables used in the formula
weights : array_like
Observation weights used in estimation
weight_type : str
Name of moment condition weight function to use in the GMM estimation
**weight_config
Additional keyword arguments to pass to the moment condition weight
function
Notes
-----
The IV formula modifies the standard formula syntax to include a
block of the form [endog ~ instruments] which is used to indicate
the list of endogenous variables and instruments. The general
structure is `dependent ~ exog [endog ~ instruments]` and it must
be the case that the formula expressions constructed from blocks
`dependent ~ exog endog` and `dependent ~ exog instruments` are both
valid formulas.
A constant must be explicitly included using "1 +" if required.
Returns
-------
IVGMM
Model instance
Examples
--------
>>> import numpy as np
>>> from linearmodels.datasets import wage
>>> from linearmodels.iv import IVGMM
>>> data = wage.load()
>>> formula = "np.log(wage) ~ 1 + exper + exper ** 2 + brthord + [educ ~ sibs]"
>>> mod = IVGMM.from_formula(formula, data)
|
@staticmethod
def from_formula(
formula: str,
data: DataFrame,
*,
weights: IVDataLike | None = None,
weight_type: str = "robust",
**weight_config: Any,
) -> IVGMM:
"""
Parameters
----------
formula : str
Formula modified for the IV syntax described in the notes
section
data : DataFrame
DataFrame containing the variables used in the formula
weights : array_like
Observation weights used in estimation
weight_type : str
Name of moment condition weight function to use in the GMM estimation
**weight_config
Additional keyword arguments to pass to the moment condition weight
function
Notes
-----
The IV formula modifies the standard formula syntax to include a
block of the form [endog ~ instruments] which is used to indicate
the list of endogenous variables and instruments. The general
structure is `dependent ~ exog [endog ~ instruments]` and it must
be the case that the formula expressions constructed from blocks
`dependent ~ exog endog` and `dependent ~ exog instruments` are both
valid formulas.
A constant must be explicitly included using "1 +" if required.
Returns
-------
IVGMM
Model instance
Examples
--------
>>> import numpy as np
>>> from linearmodels.datasets import wage
>>> from linearmodels.iv import IVGMM
>>> data = wage.load()
>>> formula = "np.log(wage) ~ 1 + exper + exper ** 2 + brthord + [educ ~ sibs]"
>>> mod = IVGMM.from_formula(formula, data)
"""
mod = _gmm_model_from_formula(
IVGMM, formula, data, weights, weight_type, **weight_config
)
assert isinstance(mod, IVGMM)
return mod
|
(formula: str, data: pandas.core.frame.DataFrame, *, weights: Union[linearmodels.iv.data.IVData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType] = None, weight_type: str = 'robust', **weight_config: Any) -> linearmodels.iv.model.IVGMM
|
42,920 |
linearmodels.iv.model
|
IVGMMCUE
|
Estimation of IV models using continuously updating GMM
Parameters
----------
dependent : array_like
Endogenous variables (nobs by 1)
exog : array_like
Exogenous regressors (nobs by nexog)
endog : array_like
Endogenous regressors (nobs by nendog)
instruments : array_like
Instrumental variables (nobs by ninstr)
weights : array_like
Observation weights used in estimation
weight_type : str
Name of moment condition weight function to use in the GMM estimation
**weight_config
Additional keyword arguments to pass to the moment condition weight
function
Notes
-----
Available weight functions are:
* "unadjusted", "homoskedastic" - Assumes moment conditions are
homoskedastic
* "robust", "heteroskedastic" - Allows for heteroskedasticity by not
autocorrelation
* "kernel" - Allows for heteroskedasticity and autocorrelation
* "cluster" - Allows for one-way cluster dependence
In most circumstances, the ``center`` weight option should be ``True`` to
avoid starting value dependence.
.. math::
    \hat{\beta}_{cue} & =\min_{\beta}\bar{g}(\beta)'W(\beta)^{-1}\bar{g}(\beta)\\
    \bar{g}(\beta) & =n^{-1}\sum_{i=1}^{n}z_{i}(y_{i}-x_{i}\beta)
where :math:`W(\beta)` is a weight matrix that depends on :math:`\beta`
through :math:`\epsilon_i = y_i - x_i\beta`.
See Also
--------
IV2SLS, IVLIML, IVGMM
|
class IVGMMCUE(_IVGMMBase):
r"""
Estimation of IV models using continuously updating GMM
Parameters
----------
dependent : array_like
Endogenous variables (nobs by 1)
exog : array_like
Exogenous regressors (nobs by nexog)
endog : array_like
Endogenous regressors (nobs by nendog)
instruments : array_like
Instrumental variables (nobs by ninstr)
weights : array_like
Observation weights used in estimation
weight_type : str
Name of moment condition weight function to use in the GMM estimation
**weight_config
Additional keyword arguments to pass to the moment condition weight
function
Notes
-----
Available weight functions are:
* "unadjusted", "homoskedastic" - Assumes moment conditions are
homoskedastic
* "robust", "heteroskedastic" - Allows for heteroskedasticity by not
autocorrelation
* "kernel" - Allows for heteroskedasticity and autocorrelation
* "cluster" - Allows for one-way cluster dependence
In most circumstances, the ``center`` weight option should be ``True`` to
avoid starting value dependence.
.. math::
        \hat{\beta}_{cue} & =\min_{\beta}\bar{g}(\beta)'W(\beta)^{-1}\bar{g}(\beta)\\
        \bar{g}(\beta) & =n^{-1}\sum_{i=1}^{n}z_{i}(y_{i}-x_{i}\beta)
where :math:`W(\beta)` is a weight matrix that depends on :math:`\beta`
through :math:`\epsilon_i = y_i - x_i\beta`.
See Also
--------
IV2SLS, IVLIML, IVGMM
"""
def __init__(
self,
dependent: IVDataLike,
exog: IVDataLike | None,
endog: IVDataLike | None,
instruments: IVDataLike | None,
*,
weights: IVDataLike | None = None,
weight_type: str = "robust",
**weight_config: Any,
) -> None:
self._method = "IV-GMM-CUE"
super().__init__(
dependent,
exog,
endog,
instruments,
weights=weights,
weight_type=weight_type,
**weight_config,
)
if "center" not in weight_config:
weight_config["center"] = True
@staticmethod
def from_formula(
formula: str,
data: DataFrame,
*,
weights: IVDataLike | None = None,
weight_type: str = "robust",
**weight_config: Any,
) -> IVGMMCUE:
"""
Parameters
----------
formula : str
Formula modified for the IV syntax described in the notes
section
data : DataFrame
DataFrame containing the variables used in the formula
weights : array_like
Observation weights used in estimation
weight_type : str
Name of moment condition weight function to use in the GMM estimation
**weight_config
Additional keyword arguments to pass to the moment condition weight
function
Returns
-------
IVGMMCUE
Model instance
Notes
-----
The IV formula modifies the standard formula syntax to include a
block of the form [endog ~ instruments] which is used to indicate
the list of endogenous variables and instruments. The general
structure is `dependent ~ exog [endog ~ instruments]` and it must
be the case that the formula expressions constructed from blocks
`dependent ~ exog endog` and `dependent ~ exog instruments` are both
valid formulas.
A constant must be explicitly included using "1 +" if required.
Examples
--------
>>> import numpy as np
>>> from linearmodels.datasets import wage
>>> from linearmodels.iv import IVGMMCUE
>>> data = wage.load()
>>> formula = "np.log(wage) ~ 1 + exper + exper ** 2 + brthord + [educ ~ sibs]"
>>> mod = IVGMMCUE.from_formula(formula, data)
"""
mod = _gmm_model_from_formula(
IVGMMCUE, formula, data, weights, weight_type, **weight_config
)
assert isinstance(mod, IVGMMCUE)
return mod
def j(
self, params: Float64Array, x: Float64Array, y: Float64Array, z: Float64Array
) -> float:
r"""
Optimization target
Parameters
----------
params : ndarray
Parameter vector (nvar)
x : ndarray
Regressor matrix (nobs by nvar)
y : ndarray
Regressand matrix (nobs by 1)
z : ndarray
Instrument matrix (nobs by ninstr)
Returns
-------
float
GMM objective function, also known as the J statistic
Notes
-----
The GMM objective function is defined as
.. math::
J(\beta) = \bar{g}(\beta)'W(\beta)^{-1}\bar{g}(\beta)
where :math:`\bar{g}(\beta)` is the average of the moment
conditions, :math:`z_i \hat{\epsilon}_i`, where
:math:`\hat{\epsilon}_i = y_i - x_i\beta`. The weighting matrix
is some estimator of the long-run variance of the moment conditions.
        Unlike traditional GMM, the weighting matrix is simultaneously computed
with the moment conditions, and so has explicit dependence on
:math:`\beta`.
"""
nobs = y.shape[0]
weight_matrix = self._weight.weight_matrix
eps = y - x @ params[:, None]
w = inv(weight_matrix(x, z, eps))
g_bar = (z * eps).mean(0)
        return nobs * float(g_bar.T @ w @ g_bar)
def estimate_parameters(
self,
starting: Float64Array,
x: Float64Array,
y: Float64Array,
z: Float64Array,
display: bool = False,
opt_options: dict[str, Any] | None = None,
) -> tuple[Float64Array, int]:
r"""
Parameters
----------
starting : ndarray
Starting values for the optimization
x : ndarray
Regressor matrix (nobs by nvar)
y : ndarray
Regressand matrix (nobs by 1)
z : ndarray
Instrument matrix (nobs by ninstr)
display : bool
Flag indicating whether to display iterative optimizer output
opt_options : dict
Dictionary containing additional keyword arguments to pass to
scipy.optimize.minimize.
Returns
-------
ndarray
Estimated parameters (nvar by 1)
Notes
-----
Exposed to facilitate estimation with other data, e.g., bootstrapped
samples. Performs no error checking.
See Also
--------
scipy.optimize.minimize
"""
args = (x, y, z)
if opt_options is None:
opt_options = {}
assert opt_options is not None
options = {"disp": display}
if "options" in opt_options:
opt_options = opt_options.copy()
options.update(opt_options.pop("options"))
res = minimize(self.j, starting, args=args, options=options, **opt_options)
return res.x[:, None], res.nit
def fit(
self,
*,
starting: Float64Array | Series | None = None,
display: bool = False,
cov_type: str = "robust",
debiased: bool = False,
opt_options: dict[str, Any] | None = None,
**cov_config: Any,
) -> OLSResults | IVGMMResults:
r"""
Estimate model parameters
Parameters
----------
starting : ndarray
Starting values to use in optimization. If not provided, 2SLS
estimates are used.
display : bool
Flag indicating whether to display optimization output
cov_type : str
Name of covariance estimator to use
debiased : bool
            Flag indicating whether to debias the covariance estimator using
a degree of freedom adjustment.
opt_options : dict
Additional options to pass to scipy.optimize.minimize when
optimizing the objective function. If not provided, defers to
scipy to choose an appropriate optimizer. All minimize inputs
except ``fun``, ``x0``, and ``args`` can be overridden.
**cov_config
Additional parameters to pass to covariance estimator. Supported
parameters depend on specific covariance structure assumed. See
:class:`linearmodels.iv.gmm.IVGMMCovariance` for details
on the available options. Defaults are used if no covariance
configuration is provided.
Returns
-------
IVGMMResults
Results container
Notes
-----
Starting values are computed by IVGMM.
        See Also
--------
linearmodels.iv.gmm.IVGMMCovariance
"""
wy, wx, wz = self._wy, self._wx, self._wz
weight_matrix = self._weight.weight_matrix
if starting is None:
exog = None if self.exog.shape[1] == 0 else self.exog
endog = None if self.endog.shape[1] == 0 else self.endog
instr = None if self.instruments.shape[1] == 0 else self.instruments
res = IVGMM(
self.dependent,
exog,
endog,
instr,
weights=self.weights,
weight_type=self._weight_type,
**self._weight_config,
).fit()
starting = asarray(res.params)
else:
starting = asarray(starting)
if len(starting) != self.exog.shape[1] + self.endog.shape[1]:
                raise ValueError(
                    "starting does not have the correct number of values"
                )
params, iters = self.estimate_parameters(
starting, wx, wy, wz, display, opt_options=opt_options
)
eps = wy - wx @ params
wmat = inv(weight_matrix(wx, wz, eps))
cov_config["debiased"] = debiased
cov_estimator = IVGMMCovariance(
wx, wy, wz, params, wmat, cov_type, **cov_config
)
results = self._post_estimation(params, cov_estimator, cov_type)
gmm_pe = self._gmm_post_estimation(params, wmat, iters)
results.update(gmm_pe)
return IVGMMResults(results, self)
|
(dependent: 'IVDataLike', exog: 'IVDataLike | None', endog: 'IVDataLike | None', instruments: 'IVDataLike | None', *, weights: 'IVDataLike | None' = None, weight_type: 'str' = 'robust', **weight_config: 'Any') -> 'None'
|
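A minimal sketch of CUE in use, assuming simulated data; the names are illustrative. Starting values default to two-step IVGMM estimates, and centered moments are used unless overridden:
# Sketch: continuously updated GMM (simulated data; names illustrative)
import numpy as np
import pandas as pd
from linearmodels.iv import IVGMMCUE

rng = np.random.default_rng(1)
n = 500
z = rng.standard_normal((n, 3))
x = z.sum(1) + rng.standard_normal(n)
y = 0.5 + 1.5 * x + rng.standard_normal(n)
df = pd.DataFrame({"y": y, "x": x, "z1": z[:, 0], "z2": z[:, 1], "z3": z[:, 2]})
df["const"] = 1.0

mod = IVGMMCUE(df.y, df[["const"]], df[["x"]], df[["z1", "z2", "z3"]])
res = mod.fit(display=False)   # minimizes J(beta) directly via scipy
print(res.params)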
42,921 |
linearmodels.iv.model
|
__init__
| null |
def __init__(
self,
dependent: IVDataLike,
exog: IVDataLike | None,
endog: IVDataLike | None,
instruments: IVDataLike | None,
*,
weights: IVDataLike | None = None,
weight_type: str = "robust",
**weight_config: Any,
) -> None:
self._method = "IV-GMM-CUE"
super().__init__(
dependent,
exog,
endog,
instruments,
weights=weights,
weight_type=weight_type,
**weight_config,
)
if "center" not in weight_config:
weight_config["center"] = True
|
(self, dependent: Union[linearmodels.iv.data.IVData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series], exog: Union[linearmodels.iv.data.IVData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType], endog: Union[linearmodels.iv.data.IVData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType], instruments: Union[linearmodels.iv.data.IVData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType], *, weights: Union[linearmodels.iv.data.IVData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType] = None, weight_type: str = 'robust', **weight_config: Any) -> NoneType
|
42,928 |
linearmodels.iv.model
|
estimate_parameters
|
Parameters
----------
starting : ndarray
Starting values for the optimization
x : ndarray
Regressor matrix (nobs by nvar)
y : ndarray
Regressand matrix (nobs by 1)
z : ndarray
Instrument matrix (nobs by ninstr)
display : bool
Flag indicating whether to display iterative optimizer output
opt_options : dict
Dictionary containing additional keyword arguments to pass to
scipy.optimize.minimize.
Returns
-------
ndarray
Estimated parameters (nvar by 1)
Notes
-----
Exposed to facilitate estimation with other data, e.g., bootstrapped
samples. Performs no error checking.
See Also
--------
scipy.optimize.minimize
|
def estimate_parameters(
self,
starting: Float64Array,
x: Float64Array,
y: Float64Array,
z: Float64Array,
display: bool = False,
opt_options: dict[str, Any] | None = None,
) -> tuple[Float64Array, int]:
r"""
Parameters
----------
starting : ndarray
Starting values for the optimization
x : ndarray
Regressor matrix (nobs by nvar)
y : ndarray
Regressand matrix (nobs by 1)
z : ndarray
Instrument matrix (nobs by ninstr)
display : bool
Flag indicating whether to display iterative optimizer output
opt_options : dict
Dictionary containing additional keyword arguments to pass to
scipy.optimize.minimize.
Returns
-------
ndarray
Estimated parameters (nvar by 1)
Notes
-----
Exposed to facilitate estimation with other data, e.g., bootstrapped
samples. Performs no error checking.
See Also
--------
scipy.optimize.minimize
"""
args = (x, y, z)
if opt_options is None:
opt_options = {}
assert opt_options is not None
options = {"disp": display}
if "options" in opt_options:
opt_options = opt_options.copy()
options.update(opt_options.pop("options"))
res = minimize(self.j, starting, args=args, options=options, **opt_options)
return res.x[:, None], res.nit
|
(self, starting: numpy.ndarray, x: numpy.ndarray, y: numpy.ndarray, z: numpy.ndarray, display: bool = False, opt_options: Optional[dict[str, Any]] = None) -> tuple[numpy.ndarray, int]
|
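Because ``estimate_parameters`` skips validation and works directly on arrays, it suits resampling loops; a hedged sketch of a pairs bootstrap, assuming ``mod`` is an ``IVGMMCUE`` instance and ``wx``, ``wy``, ``wz`` are the corresponding regressor, regressand, and instrument arrays (all names are assumptions for illustration):
# Sketch: pairs bootstrap via estimate_parameters (mod, wx, wy, wz assumed)
import numpy as np

rng = np.random.default_rng(0)
start = np.squeeze(np.asarray(mod.fit().params))    # full-sample estimates as start
boot = []
for _ in range(199):
    idx = rng.integers(0, wy.shape[0], wy.shape[0])  # resample rows with replacement
    b, _nit = mod.estimate_parameters(start, wx[idx], wy[idx], wz[idx])
    boot.append(b.ravel())
boot_se = np.asarray(boot).std(0)                    # bootstrap standard errors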
42,929 |
linearmodels.iv.model
|
fit
|
Estimate model parameters
Parameters
----------
starting : ndarray
Starting values to use in optimization. If not provided, 2SLS
estimates are used.
display : bool
Flag indicating whether to display optimization output
cov_type : str
Name of covariance estimator to use
debiased : bool
    Flag indicating whether to debias the covariance estimator using
a degree of freedom adjustment.
opt_options : dict
Additional options to pass to scipy.optimize.minimize when
optimizing the objective function. If not provided, defers to
scipy to choose an appropriate optimizer. All minimize inputs
except ``fun``, ``x0``, and ``args`` can be overridden.
**cov_config
Additional parameters to pass to covariance estimator. Supported
parameters depend on specific covariance structure assumed. See
:class:`linearmodels.iv.gmm.IVGMMCovariance` for details
on the available options. Defaults are used if no covariance
configuration is provided.
Returns
-------
IVGMMResults
Results container
Notes
-----
Starting values are computed by IVGMM.
See Also
--------
linearmodels.iv.gmm.IVGMMCovariance
|
def fit(
self,
*,
starting: Float64Array | Series | None = None,
display: bool = False,
cov_type: str = "robust",
debiased: bool = False,
opt_options: dict[str, Any] | None = None,
**cov_config: Any,
) -> OLSResults | IVGMMResults:
r"""
Estimate model parameters
Parameters
----------
starting : ndarray
Starting values to use in optimization. If not provided, 2SLS
estimates are used.
display : bool
Flag indicating whether to display optimization output
cov_type : str
Name of covariance estimator to use
debiased : bool
            Flag indicating whether to debias the covariance estimator using
a degree of freedom adjustment.
opt_options : dict
Additional options to pass to scipy.optimize.minimize when
optimizing the objective function. If not provided, defers to
scipy to choose an appropriate optimizer. All minimize inputs
except ``fun``, ``x0``, and ``args`` can be overridden.
**cov_config
Additional parameters to pass to covariance estimator. Supported
parameters depend on specific covariance structure assumed. See
:class:`linearmodels.iv.gmm.IVGMMCovariance` for details
on the available options. Defaults are used if no covariance
configuration is provided.
Returns
-------
IVGMMResults
Results container
Notes
-----
Starting values are computed by IVGMM.
        See Also
--------
linearmodels.iv.gmm.IVGMMCovariance
"""
wy, wx, wz = self._wy, self._wx, self._wz
weight_matrix = self._weight.weight_matrix
if starting is None:
exog = None if self.exog.shape[1] == 0 else self.exog
endog = None if self.endog.shape[1] == 0 else self.endog
instr = None if self.instruments.shape[1] == 0 else self.instruments
res = IVGMM(
self.dependent,
exog,
endog,
instr,
weights=self.weights,
weight_type=self._weight_type,
**self._weight_config,
).fit()
starting = asarray(res.params)
else:
starting = asarray(starting)
if len(starting) != self.exog.shape[1] + self.endog.shape[1]:
            raise ValueError(
                "starting does not have the correct number of values"
            )
params, iters = self.estimate_parameters(
starting, wx, wy, wz, display, opt_options=opt_options
)
eps = wy - wx @ params
wmat = inv(weight_matrix(wx, wz, eps))
cov_config["debiased"] = debiased
cov_estimator = IVGMMCovariance(
wx, wy, wz, params, wmat, cov_type, **cov_config
)
results = self._post_estimation(params, cov_estimator, cov_type)
gmm_pe = self._gmm_post_estimation(params, wmat, iters)
results.update(gmm_pe)
return IVGMMResults(results, self)
|
(self, *, starting: Union[numpy.ndarray, pandas.core.series.Series, NoneType] = None, display: bool = False, cov_type: str = 'robust', debiased: bool = False, opt_options: Optional[dict[str, Any]] = None, **cov_config: Any) -> linearmodels.iv.results.OLSResults | linearmodels.iv.results.IVGMMResults
|
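A sketch of the ``opt_options`` pass-through (``mod`` is assumed to be an ``IVGMMCUE`` instance); everything except ``fun``, ``x0``, and ``args`` is forwarded to ``scipy.optimize.minimize``, and a nested ``options`` dict is merged with the display flag:
# Sketch: forwarding optimizer settings to scipy.optimize.minimize
res = mod.fit(
    display=True,                        # optimizer progress output
    opt_options={
        "method": "L-BFGS-B",            # select the optimizer explicitly
        "options": {"maxiter": 500},     # merged with {"disp": display}
    },
    cov_type="kernel",                   # HAC covariance for the estimates
)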
42,930 |
linearmodels.iv.model
|
from_formula
|
Parameters
----------
formula : str
Formula modified for the IV syntax described in the notes
section
data : DataFrame
DataFrame containing the variables used in the formula
weights : array_like
Observation weights used in estimation
weight_type : str
Name of moment condition weight function to use in the GMM estimation
**weight_config
Additional keyword arguments to pass to the moment condition weight
function
Returns
-------
IVGMMCUE
Model instance
Notes
-----
The IV formula modifies the standard formula syntax to include a
block of the form [endog ~ instruments] which is used to indicate
the list of endogenous variables and instruments. The general
structure is `dependent ~ exog [endog ~ instruments]` and it must
be the case that the formula expressions constructed from blocks
`dependent ~ exog endog` and `dependent ~ exog instruments` are both
valid formulas.
A constant must be explicitly included using "1 +" if required.
Examples
--------
>>> import numpy as np
>>> from linearmodels.datasets import wage
>>> from linearmodels.iv import IVGMMCUE
>>> data = wage.load()
>>> formula = "np.log(wage) ~ 1 + exper + exper ** 2 + brthord + [educ ~ sibs]"
>>> mod = IVGMMCUE.from_formula(formula, data)
|
@staticmethod
def from_formula(
formula: str,
data: DataFrame,
*,
weights: IVDataLike | None = None,
weight_type: str = "robust",
**weight_config: Any,
) -> IVGMMCUE:
"""
Parameters
----------
formula : str
Formula modified for the IV syntax described in the notes
section
data : DataFrame
DataFrame containing the variables used in the formula
weights : array_like
Observation weights used in estimation
weight_type : str
Name of moment condition weight function to use in the GMM estimation
**weight_config
Additional keyword arguments to pass to the moment condition weight
function
Returns
-------
IVGMMCUE
Model instance
Notes
-----
The IV formula modifies the standard formula syntax to include a
block of the form [endog ~ instruments] which is used to indicate
the list of endogenous variables and instruments. The general
structure is `dependent ~ exog [endog ~ instruments]` and it must
be the case that the formula expressions constructed from blocks
`dependent ~ exog endog` and `dependent ~ exog instruments` are both
valid formulas.
A constant must be explicitly included using "1 +" if required.
Examples
--------
>>> import numpy as np
>>> from linearmodels.datasets import wage
>>> from linearmodels.iv import IVGMMCUE
>>> data = wage.load()
>>> formula = "np.log(wage) ~ 1 + exper + exper ** 2 + brthord + [educ ~ sibs]"
>>> mod = IVGMMCUE.from_formula(formula, data)
"""
mod = _gmm_model_from_formula(
IVGMMCUE, formula, data, weights, weight_type, **weight_config
)
assert isinstance(mod, IVGMMCUE)
return mod
|
(formula: str, data: pandas.core.frame.DataFrame, *, weights: Union[linearmodels.iv.data.IVData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType] = None, weight_type: str = 'robust', **weight_config: Any) -> linearmodels.iv.model.IVGMMCUE
|
42,931 |
linearmodels.iv.model
|
j
|
Optimization target
Parameters
----------
params : ndarray
Parameter vector (nvar)
x : ndarray
Regressor matrix (nobs by nvar)
y : ndarray
Regressand matrix (nobs by 1)
z : ndarray
Instrument matrix (nobs by ninstr)
Returns
-------
float
GMM objective function, also known as the J statistic
Notes
-----
The GMM objective function is defined as
.. math::
J(\beta) = \bar{g}(\beta)'W(\beta)^{-1}\bar{g}(\beta)
where :math:`\bar{g}(\beta)` is the average of the moment
conditions, :math:`z_i \hat{\epsilon}_i`, where
:math:`\hat{\epsilon}_i = y_i - x_i\beta`. The weighting matrix
is some estimator of the long-run variance of the moment conditions.
Unlike traditional GMM, the weighting matrix is simultaneously computed
with the moment conditions, and so has explicit dependence on
:math:`\beta`.
|
def j(
self, params: Float64Array, x: Float64Array, y: Float64Array, z: Float64Array
) -> float:
r"""
Optimization target
Parameters
----------
params : ndarray
Parameter vector (nvar)
x : ndarray
Regressor matrix (nobs by nvar)
y : ndarray
Regressand matrix (nobs by 1)
z : ndarray
Instrument matrix (nobs by ninstr)
Returns
-------
float
GMM objective function, also known as the J statistic
Notes
-----
The GMM objective function is defined as
.. math::
J(\beta) = \bar{g}(\beta)'W(\beta)^{-1}\bar{g}(\beta)
where :math:`\bar{g}(\beta)` is the average of the moment
conditions, :math:`z_i \hat{\epsilon}_i`, where
:math:`\hat{\epsilon}_i = y_i - x_i\beta`. The weighting matrix
is some estimator of the long-run variance of the moment conditions.
        Unlike traditional GMM, the weighting matrix is simultaneously computed
with the moment conditions, and so has explicit dependence on
:math:`\beta`.
"""
nobs = y.shape[0]
weight_matrix = self._weight.weight_matrix
eps = y - x @ params[:, None]
w = inv(weight_matrix(x, z, eps))
g_bar = (z * eps).mean(0)
    return nobs * float(g_bar.T @ w @ g_bar)
|
(self, params: numpy.ndarray, x: numpy.ndarray, y: numpy.ndarray, z: numpy.ndarray) -> float
|
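The objective is compact enough to write out directly; a numpy sketch mirroring the formula above, where ``weight_matrix`` stands in for whichever moment-condition weight estimator is chosen (an assumption for illustration, not a fixed API):
# Sketch: J(beta) = n * g_bar' W(beta)^{-1} g_bar
import numpy as np

def j_stat(beta, x, y, z, weight_matrix):
    eps = y - x @ beta[:, None]       # residuals at the trial beta
    w = np.linalg.inv(weight_matrix(x, z, eps))
    g_bar = (z * eps).mean(0)         # average moment conditions
    return y.shape[0] * float(g_bar @ w @ g_bar)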
42,935 |
linearmodels.iv.model
|
IVLIML
|
Limited information ML and k-class estimation of IV models
Parameters
----------
dependent : array_like
Endogenous variables (nobs by 1)
exog : array_like
Exogenous regressors (nobs by nexog)
endog : array_like
Endogenous regressors (nobs by nendog)
instruments : array_like
Instrumental variables (nobs by ninstr)
weights : array_like
Observation weights used in estimation
fuller : float
Fuller's alpha to modify LIML estimator. Default returns unmodified
LIML estimator.
kappa : float
Parameter value for k-class estimation. If None, computed to
produce LIML parameter estimate.
Notes
-----
``kappa`` and ``fuller`` should not be used simultaneously since Fuller's
alpha applies an adjustment to ``kappa``, and so the same result can be
computed using only ``kappa``. Fuller's alpha is used to adjust the
LIML estimate of :math:`\kappa`, which is computed whenever ``kappa``
is not provided.
The LIML estimator is defined as
.. math::
\hat{\beta}_{\kappa} & =(X(I-\kappa M_{z})X)^{-1}X(I-\kappa M_{z})Y\\
M_{z} & =I-P_{z}\\
P_{z} & =Z(Z'Z)^{-1}Z'
where :math:`Z` contains both the exogenous regressors and the instruments.
:math:`\kappa` is estimated as part of the LIML estimator.
When using Fuller's :math:`\alpha`, the value used is modified to
.. math::
\kappa-\alpha/(n-n_{instr})
.. todo::
* VCV: bootstrap
See Also
--------
IV2SLS, IVGMM, IVGMMCUE
|
class IVLIML(_IVLSModelBase):
r"""
Limited information ML and k-class estimation of IV models
Parameters
----------
dependent : array_like
Endogenous variables (nobs by 1)
exog : array_like
Exogenous regressors (nobs by nexog)
endog : array_like
Endogenous regressors (nobs by nendog)
instruments : array_like
Instrumental variables (nobs by ninstr)
weights : array_like
Observation weights used in estimation
fuller : float
Fuller's alpha to modify LIML estimator. Default returns unmodified
LIML estimator.
kappa : float
Parameter value for k-class estimation. If None, computed to
produce LIML parameter estimate.
Notes
-----
``kappa`` and ``fuller`` should not be used simultaneously since Fuller's
alpha applies an adjustment to ``kappa``, and so the same result can be
computed using only ``kappa``. Fuller's alpha is used to adjust the
LIML estimate of :math:`\kappa`, which is computed whenever ``kappa``
is not provided.
The LIML estimator is defined as
.. math::
\hat{\beta}_{\kappa} & =(X(I-\kappa M_{z})X)^{-1}X(I-\kappa M_{z})Y\\
M_{z} & =I-P_{z}\\
P_{z} & =Z(Z'Z)^{-1}Z'
where :math:`Z` contains both the exogenous regressors and the instruments.
:math:`\kappa` is estimated as part of the LIML estimator.
When using Fuller's :math:`\alpha`, the value used is modified to
.. math::
\kappa-\alpha/(n-n_{instr})
.. todo::
* VCV: bootstrap
See Also
--------
IV2SLS, IVGMM, IVGMMCUE
"""
def __init__(
self,
dependent: IVDataLike,
exog: IVDataLike | None,
endog: IVDataLike | None,
instruments: IVDataLike | None,
*,
weights: IVDataLike | None = None,
fuller: Numeric = 0,
kappa: OptionalNumeric = None,
):
super().__init__(
dependent,
exog,
endog,
instruments,
weights=weights,
fuller=fuller,
kappa=kappa,
)
@staticmethod
def from_formula(
formula: str,
data: DataFrame,
*,
weights: IVDataLike | None = None,
fuller: float = 0,
kappa: OptionalNumeric = None,
) -> IVLIML:
"""
Parameters
----------
formula : str
Formula modified for the IV syntax described in the notes
section
data : DataFrame
DataFrame containing the variables used in the formula
weights : array_like
Observation weights used in estimation
fuller : float
Fuller's alpha to modify LIML estimator. Default returns unmodified
LIML estimator.
kappa : float
Parameter value for k-class estimation. If not provided, computed to
produce LIML parameter estimate.
Returns
-------
IVLIML
Model instance
Notes
-----
The IV formula modifies the standard formula syntax to include a
block of the form [endog ~ instruments] which is used to indicate
the list of endogenous variables and instruments. The general
structure is `dependent ~ exog [endog ~ instruments]` and it must
be the case that the formula expressions constructed from blocks
`dependent ~ exog endog` and `dependent ~ exog instruments` are both
valid formulas.
A constant must be explicitly included using '1 +' if required.
Examples
--------
>>> import numpy as np
>>> from linearmodels.datasets import wage
>>> from linearmodels.iv import IVLIML
>>> data = wage.load()
>>> formula = "np.log(wage) ~ 1 + exper + exper ** 2 + brthord + [educ ~ sibs]"
>>> mod = IVLIML.from_formula(formula, data)
"""
parser = IVFormulaParser(formula, data)
dep, exog, endog, instr = parser.data
mod: IVLIML = IVLIML(
dep, exog, endog, instr, weights=weights, fuller=fuller, kappa=kappa
)
mod.formula = formula
return mod
|
(dependent: 'IVDataLike', exog: 'IVDataLike | None', endog: 'IVDataLike | None', instruments: 'IVDataLike | None', *, weights: 'IVDataLike | None' = None, fuller: 'Numeric' = 0, kappa: 'OptionalNumeric' = None)
|
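A usage sketch comparing plain LIML with Fuller's modification; the data are simulated and all names are illustrative:
# Sketch: LIML vs. Fuller-adjusted LIML (simulated data; names illustrative)
import numpy as np
import pandas as pd
from linearmodels.iv import IVLIML

rng = np.random.default_rng(2)
n = 400
z = rng.standard_normal((n, 2))
x = z @ np.array([0.3, 0.3]) + rng.standard_normal(n)
y = 1.0 + x + rng.standard_normal(n)
df = pd.DataFrame({"y": y, "x": x, "z1": z[:, 0], "z2": z[:, 1]})
df["const"] = 1.0

res_liml = IVLIML(df.y, df[["const"]], df[["x"]], df[["z1", "z2"]]).fit()
res_fuller = IVLIML(df.y, df[["const"]], df[["x"]], df[["z1", "z2"]], fuller=1).fit()
print(res_liml.params)
print(res_fuller.params)   # fuller=1 reduces the LIML kappa by 1/(n - n_instr)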
42,936 |
linearmodels.iv.model
|
__init__
| null |
def __init__(
self,
dependent: IVDataLike,
exog: IVDataLike | None,
endog: IVDataLike | None,
instruments: IVDataLike | None,
*,
weights: IVDataLike | None = None,
fuller: Numeric = 0,
kappa: OptionalNumeric = None,
):
super().__init__(
dependent,
exog,
endog,
instruments,
weights=weights,
fuller=fuller,
kappa=kappa,
)
|
(self, dependent: Union[linearmodels.iv.data.IVData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series], exog: Union[linearmodels.iv.data.IVData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType], endog: Union[linearmodels.iv.data.IVData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType], instruments: Union[linearmodels.iv.data.IVData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType], *, weights: Union[linearmodels.iv.data.IVData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType] = None, fuller: Union[int, float] = 0, kappa: Union[int, float, NoneType] = None)
|
42,944 |
linearmodels.iv.model
|
from_formula
|
Parameters
----------
formula : str
Formula modified for the IV syntax described in the notes
section
data : DataFrame
DataFrame containing the variables used in the formula
weights : array_like
Observation weights used in estimation
fuller : float
Fuller's alpha to modify LIML estimator. Default returns unmodified
LIML estimator.
kappa : float
Parameter value for k-class estimation. If not provided, computed to
produce LIML parameter estimate.
Returns
-------
IVLIML
Model instance
Notes
-----
The IV formula modifies the standard formula syntax to include a
block of the form [endog ~ instruments] which is used to indicate
the list of endogenous variables and instruments. The general
structure is `dependent ~ exog [endog ~ instruments]` and it must
be the case that the formula expressions constructed from blocks
`dependent ~ exog endog` and `dependent ~ exog instruments` are both
valid formulas.
A constant must be explicitly included using '1 +' if required.
Examples
--------
>>> import numpy as np
>>> from linearmodels.datasets import wage
>>> from linearmodels.iv import IVLIML
>>> data = wage.load()
>>> formula = "np.log(wage) ~ 1 + exper + exper ** 2 + brthord + [educ ~ sibs]"
>>> mod = IVLIML.from_formula(formula, data)
|
@staticmethod
def from_formula(
formula: str,
data: DataFrame,
*,
weights: IVDataLike | None = None,
fuller: float = 0,
kappa: OptionalNumeric = None,
) -> IVLIML:
"""
Parameters
----------
formula : str
Formula modified for the IV syntax described in the notes
section
data : DataFrame
DataFrame containing the variables used in the formula
weights : array_like
Observation weights used in estimation
fuller : float
Fuller's alpha to modify LIML estimator. Default returns unmodified
LIML estimator.
kappa : float
Parameter value for k-class estimation. If not provided, computed to
produce LIML parameter estimate.
Returns
-------
IVLIML
Model instance
Notes
-----
The IV formula modifies the standard formula syntax to include a
block of the form [endog ~ instruments] which is used to indicate
the list of endogenous variables and instruments. The general
structure is `dependent ~ exog [endog ~ instruments]` and it must
be the case that the formula expressions constructed from blocks
`dependent ~ exog endog` and `dependent ~ exog instruments` are both
valid formulas.
A constant must be explicitly included using '1 +' if required.
Examples
--------
>>> import numpy as np
>>> from linearmodels.datasets import wage
>>> from linearmodels.iv import IVLIML
>>> data = wage.load()
>>> formula = "np.log(wage) ~ 1 + exper + exper ** 2 + brthord + [educ ~ sibs]"
>>> mod = IVLIML.from_formula(formula, data)
"""
parser = IVFormulaParser(formula, data)
dep, exog, endog, instr = parser.data
mod: IVLIML = IVLIML(
dep, exog, endog, instr, weights=weights, fuller=fuller, kappa=kappa
)
mod.formula = formula
return mod
|
(formula: str, data: pandas.core.frame.DataFrame, *, weights: Union[linearmodels.iv.data.IVData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType] = None, fuller: float = 0, kappa: Union[int, float, NoneType] = None) -> linearmodels.iv.model.IVLIML
|
42,948 |
linearmodels.system.model
|
IVSystemGMM
|
System Generalized Method of Moments (GMM) estimation of linear IV models
Parameters
----------
equations : dict
Dictionary-like structure containing dependent, exogenous, endogenous
and instrumental variables. Each key is an equations label and must
be a string. Each value must be either a tuple of the form (dependent,
exog, endog, instrument[, weights]) or a dictionary with keys "dependent",
"exog". The dictionary may contain optional keys for "endog",
"instruments", and "weights". Endogenous and/or Instrument can be empty
if all variables in an equation are exogenous.
sigma : array_like
Prespecified residual covariance to use in GLS estimation. If not
provided, FGLS is implemented based on an estimate of sigma. Only used
if weight_type is "unadjusted"
weight_type : str
Name of moment condition weight function to use in the GMM estimation
**weight_config
Additional keyword arguments to pass to the moment condition weight
function
Notes
-----
Estimates a linear model using GMM. Each equation is of the form
.. math::
y_{i,k} = x_{i,k}\beta_i + \epsilon_{i,k}
where k denotes the equation and i denotes the observation index. By
stacking vertically arrays of dependent and placing the exogenous
variables into a block diagonal array, the entire system can be compactly
expressed as
.. math::
Y = X\beta + \epsilon
where
.. math::
    Y = \left[\begin{array}{c}Y_1 \\ Y_2 \\ \vdots \\ Y_K\end{array}\right]
and
.. math::
X = \left[\begin{array}{cccc}
X_1 & 0 & \ldots & 0 \\
0 & X_2 & \dots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \dots & X_K
\end{array}\right]
The system GMM estimator uses the moment condition
.. math::
z_{ij}(y_{ij} - x_{ij}\beta_j) = 0
where j indexes the equation. The estimator for the coefficients is given
by
.. math::
    \hat{\beta}_{GMM} = (X'ZW^{-1}Z'X)^{-1}X'ZW^{-1}Z'Y
where :math:`W` is a positive definite weighting matrix.
|
class IVSystemGMM(_SystemModelBase):
r"""
System Generalized Method of Moments (GMM) estimation of linear IV models
Parameters
----------
equations : dict
Dictionary-like structure containing dependent, exogenous, endogenous
and instrumental variables. Each key is an equations label and must
be a string. Each value must be either a tuple of the form (dependent,
exog, endog, instrument[, weights]) or a dictionary with keys "dependent",
"exog". The dictionary may contain optional keys for "endog",
"instruments", and "weights". Endogenous and/or Instrument can be empty
if all variables in an equation are exogenous.
sigma : array_like
Prespecified residual covariance to use in GLS estimation. If not
provided, FGLS is implemented based on an estimate of sigma. Only used
if weight_type is "unadjusted"
weight_type : str
Name of moment condition weight function to use in the GMM estimation
**weight_config
Additional keyword arguments to pass to the moment condition weight
function
Notes
-----
Estimates a linear model using GMM. Each equation is of the form
.. math::
y_{i,k} = x_{i,k}\beta_i + \epsilon_{i,k}
    where k denotes the equation and i denotes the observation index. By
stacking vertically arrays of dependent and placing the exogenous
variables into a block diagonal array, the entire system can be compactly
expressed as
.. math::
Y = X\beta + \epsilon
where
.. math::
        Y = \left[\begin{array}{c}Y_1 \\ Y_2 \\ \vdots \\ Y_K\end{array}\right]
and
.. math::
X = \left[\begin{array}{cccc}
X_1 & 0 & \ldots & 0 \\
0 & X_2 & \dots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \dots & X_K
\end{array}\right]
The system GMM estimator uses the moment condition
.. math::
z_{ij}(y_{ij} - x_{ij}\beta_j) = 0
where j indexes the equation. The estimator for the coefficients is given
by
.. math::
        \hat{\beta}_{GMM} = (X'ZW^{-1}Z'X)^{-1}X'ZW^{-1}Z'Y
where :math:`W` is a positive definite weighting matrix.
"""
def __init__(
self,
equations: Mapping[
str, Mapping[str, ArrayLike | None] | Sequence[ArrayLike | None]
],
*,
sigma: ArrayLike | None = None,
weight_type: str = "robust",
**weight_config: bool | str | float,
) -> None:
super().__init__(equations, sigma=sigma)
self._weight_type = weight_type
self._weight_config = weight_config
if weight_type not in COV_TYPES:
raise ValueError("Unknown estimator for weight_type")
if weight_type not in ("unadjusted", "homoskedastic") and sigma is not None:
warnings.warn(
"sigma has been provided but the estimated weight "
"matrix not unadjusted (homoskedastic). sigma will "
"be ignored.",
UserWarning,
)
weight_type = COV_TYPES[weight_type]
self._weight_est = GMM_W_EST[weight_type](**weight_config)
def fit(
self,
*,
iter_limit: int = 2,
tol: float = 1e-6,
initial_weight: Float64Array | None = None,
cov_type: str = "robust",
**cov_config: bool | float,
) -> GMMSystemResults:
"""
Estimate model parameters
Parameters
----------
iter_limit : int
Maximum number of iterations for iterative GLS
tol : float
Tolerance to use when checking for convergence in iterative GLS
initial_weight : ndarray
Initial weighting matrix to use in the first step. If not
specified, uses the average outer-product of the set containing
the exogenous variables and instruments.
cov_type : str
Name of covariance estimator. Valid options are
* "unadjusted", "homoskedastic" - Classic covariance estimator
* "robust", "heteroskedastic" - Heteroskedasticity robust
covariance estimator
**cov_config
Additional parameters to pass to covariance estimator. All
estimators support debiased which employs a small-sample adjustment
Returns
-------
GMMSystemResults
Estimation results
"""
if cov_type not in COV_TYPES:
raise ValueError(f"Unknown cov_type: {cov_type}")
# Parameter estimation
wx, wy, wz = self._wx, self._wy, self._wz
k = len(wx)
nobs = wx[0].shape[0]
k_total = sum(map(lambda a: a.shape[1], wz))
if initial_weight is None:
w = blocked_inner_prod(wz, np.eye(k_total)) / nobs
else:
w = initial_weight
assert w is not None
beta_last = beta = self._blocked_gmm(
wx, wy, wz, w=cast(Float64Array, w), constraints=self.constraints
)
_eps = []
loc = 0
for i in range(k):
nb = wx[i].shape[1]
b = beta[loc : loc + nb]
_eps.append(wy[i] - wx[i] @ b)
loc += nb
eps = np.hstack(_eps)
sigma = self._weight_est.sigma(eps, wx) if self._sigma is None else self._sigma
vinv = None
iters = 1
norm = 10 * tol + 1
while iters < iter_limit and norm > tol:
sigma = (
self._weight_est.sigma(eps, wx) if self._sigma is None else self._sigma
)
w = self._weight_est.weight_matrix(wx, wz, eps, sigma=sigma)
beta = self._blocked_gmm(
wx, wy, wz, w=cast(Float64Array, w), constraints=self.constraints
)
delta = beta_last - beta
if vinv is None:
winv = np.linalg.inv(w)
xpz = blocked_cross_prod(wx, wz, np.eye(k))
xpz = cast(Float64Array, xpz / nobs)
v = (xpz @ winv @ xpz.T) / nobs
vinv = inv(v)
norm = float(np.squeeze(delta.T @ vinv @ delta))
beta_last = beta
_eps = []
loc = 0
for i in range(k):
nb = wx[i].shape[1]
b = beta[loc : loc + nb]
_eps.append(wy[i] - wx[i] @ b)
loc += nb
eps = np.hstack(_eps)
iters += 1
cov_type = COV_TYPES[cov_type]
cov_est = GMM_COV_EST[cov_type]
cov = cov_est(
wx, wz, eps, w, sigma=sigma, constraints=self._constraints, **cov_config
)
weps = eps
_eps = []
loc = 0
x, y = self._x, self._y
for i in range(k):
nb = x[i].shape[1]
b = beta[loc : loc + nb]
_eps.append(y[i] - x[i] @ b)
loc += nb
eps = np.hstack(_eps)
iters += 1
return self._finalize_results(
beta,
cov.cov,
weps,
eps,
cast(np.ndarray, w),
sigma,
iters - 1,
cov_type,
cov_config,
cov,
)
@staticmethod
def _blocked_gmm(
x: ArraySequence,
y: ArraySequence,
z: ArraySequence,
*,
w: Float64Array,
constraints: LinearConstraint | None = None,
) -> Float64Array:
k = len(x)
xpz = blocked_cross_prod(x, z, np.eye(k))
wi = np.linalg.inv(w)
xpz_wi_zpx = xpz @ wi @ xpz.T
zpy_arrs = []
for i in range(k):
zpy_arrs.append(z[i].T @ y[i])
zpy = np.vstack(zpy_arrs)
xpz_wi_zpy = xpz @ wi @ zpy
params = _parameters_from_xprod(xpz_wi_zpx, xpz_wi_zpy, constraints=constraints)
return params
def _finalize_results(
self,
beta: Float64Array,
cov: Float64Array,
weps: Float64Array,
eps: Float64Array,
wmat: Float64Array,
sigma: Float64Array,
iter_count: int,
cov_type: str,
cov_config: dict[str, bool | float],
cov_est: GMMHeteroskedasticCovariance | GMMHomoskedasticCovariance,
) -> GMMSystemResults:
"""Collect results to return after GLS estimation"""
k = len(self._wy)
# Repackage results for individual equations
individual = AttrDict()
debiased = bool(cov_config.get("debiased", False))
method = f"{iter_count}-Step System GMM"
if iter_count > 2:
method = "Iterative System GMM"
for i in range(k):
cons = bool(self.has_constant.iloc[i])
if cons:
c = np.sqrt(self._w[i])
ye = self._wy[i] - c @ lstsq(c, self._wy[i], rcond=None)[0]
else:
ye = self._wy[i]
total_ss = float(np.squeeze(ye.T @ ye))
stats = self._common_indiv_results(
i,
beta,
cov,
weps,
eps,
method,
cov_type,
cov_est,
iter_count,
debiased,
cons,
total_ss,
weight_est=self._weight_est,
)
key = self._eq_labels[i]
individual[key] = stats
# Populate results dictionary
nobs = eps.size
results = self._common_results(
beta, cov, method, iter_count, nobs, cov_type, sigma, individual, debiased
)
# wresid is different between GLS and OLS
wresiduals = []
for individual_key in individual:
wresiduals.append(individual[individual_key].wresid)
wresid = np.hstack(wresiduals)
results["wresid"] = wresid
results["wmat"] = wmat
results["weight_type"] = self._weight_type
results["weight_config"] = self._weight_est.config
results["cov_estimator"] = cov_est
results["cov_config"] = cov_est.cov_config
results["weight_estimator"] = self._weight_est
results["j_stat"] = self._j_statistic(beta, wmat)
r2s = [individual[eq].r2 for eq in individual]
results["system_r2"] = self._system_r2(eps, sigma, "gls", False, debiased, r2s)
return GMMSystemResults(results)
@classmethod
def from_formula(
cls,
formula: str | dict[str, str],
data: DataFrame,
*,
weights: dict[str, ArrayLike] | None = None,
weight_type: str = "robust",
**weight_config: bool | str | float,
) -> IVSystemGMM:
"""
        Specify a system GMM model using the formula interface
Parameters
----------
formula : {str, dict-like}
Either a string or a dictionary of strings where each value in
the dictionary represents a single equation. See Notes for a
description of the accepted syntax
data : DataFrame
Frame containing named variables
weights : dict-like
            Dictionary-like object (e.g. a DataFrame) containing variable
            weights. Each entry must have the same number of observations as
            data. If an equation label is not a key in weights, the weights
            will be set to unity
weight_type : str
Name of moment condition weight function to use in the GMM
estimation. Valid options are:
* "unadjusted", "homoskedastic" - Assume moments are homoskedastic
* "robust", "heteroskedastic" - Allow for heteroskedasticity
**weight_config
Additional keyword arguments to pass to the moment condition weight
function
Returns
-------
model : IVSystemGMM
Model instance
Notes
-----
Models can be specified in one of two ways. The first uses curly
braces to encapsulate equations. The second uses a dictionary
where each key is an equation name.
Examples
--------
The simplest format uses standard formulas for each equation
in a dictionary. Best practice is to use an Ordered Dictionary
>>> import pandas as pd
>>> import numpy as np
>>> cols = ["y1", "x1_1", "x1_2", "z1", "y2", "x2_1", "x2_2", "z2"]
>>> data = pd.DataFrame(np.random.randn(500, 8), columns=cols)
>>> from linearmodels.system import IVSystemGMM
>>> formula = {"eq1": "y1 ~ 1 + x1_1 + [x1_2 ~ z1]",
... "eq2": "y2 ~ 1 + x2_1 + [x2_2 ~ z2]"}
>>> mod = IVSystemGMM.from_formula(formula, data)
The second format uses curly braces {} to surround distinct equations
>>> formula = "{y1 ~ 1 + x1_1 + [x1_2 ~ z1]} {y2 ~ 1 + x2_1 + [x2_2 ~ z2]}"
>>> mod = IVSystemGMM.from_formula(formula, data)
It is also possible to include equation labels when using curly braces
>>> formula = "{eq1: y1 ~ x1_1 + [x1_2 ~ z1]} {eq2: y2 ~ 1 + [x2_2 ~ z2]}"
>>> mod = IVSystemGMM.from_formula(formula, data)
"""
context = capture_context(1)
parser = SystemFormulaParser(formula, data, weights, context=context)
eqns = parser.data
mod = cls(eqns, sigma=None, weight_type=weight_type, **weight_config)
mod.formula = formula
return mod
def _j_statistic(
self, params: Float64Array, weight_mat: Float64Array
) -> WaldTestStatistic:
"""
J stat and test
Parameters
----------
params : ndarray
Estimated model parameters
weight_mat : ndarray
Weighting matrix used in estimation of the parameters
Returns
-------
stat : WaldTestStatistic
Test statistic
Notes
-----
Assumes that the efficient weighting matrix has been used. Using
other weighting matrices will not produce the correct test.
"""
y, x, z = self._wy, self._wx, self._wz
k = len(x)
ze_lst = []
idx = 0
for i in range(k):
kx = x[i].shape[1]
beta = params[idx : idx + kx]
eps = y[i] - x[i] @ beta
ze_lst.append(z[i] * eps)
idx += kx
ze = np.concatenate(ze_lst, 1)
g_bar = ze.mean(0)
nobs = x[0].shape[0]
        stat = float(nobs * g_bar.T @ np.linalg.inv(weight_mat) @ g_bar)
null = "Expected moment conditions are equal to 0"
ninstr = sum(map(lambda a: a.shape[1], z))
nvar = sum(map(lambda a: a.shape[1], x))
ncons = 0 if self.constraints is None else self.constraints.r.shape[0]
return WaldTestStatistic(stat, null, ninstr - (nvar - ncons))
|
(equations: 'Mapping[str, Mapping[str, ArrayLike | None] | Sequence[ArrayLike | None]]', *, sigma: 'ArrayLike | None' = None, weight_type: 'str' = 'robust', **weight_config: 'bool | str | float') -> 'None'
|
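A usage sketch of the equations-dictionary interface with one over-identifying instrument per equation; the data are simulated and every variable name is illustrative:
# Sketch: two-equation system GMM (simulated data; names illustrative)
import numpy as np
import pandas as pd
from linearmodels.system import IVSystemGMM

rng = np.random.default_rng(3)
n = 500
cols = ["y1", "x1", "z1a", "z1b", "y2", "x2", "z2a", "z2b"]
df = pd.DataFrame(rng.standard_normal((n, 8)), columns=cols)
df["const"] = 1.0
equations = {
    "eq1": {"dependent": df.y1, "exog": df[["const"]],
            "endog": df[["x1"]], "instruments": df[["z1a", "z1b"]]},
    "eq2": {"dependent": df.y2, "exog": df[["const"]],
            "endog": df[["x2"]], "instruments": df[["z2a", "z2b"]]},
}
res = IVSystemGMM(equations, weight_type="robust").fit(cov_type="robust")
print(res.j_stat)   # over-identification test; 2 df (one extra instrument per equation)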
42,949 |
linearmodels.system.model
|
__init__
| null |
def __init__(
self,
equations: Mapping[
str, Mapping[str, ArrayLike | None] | Sequence[ArrayLike | None]
],
*,
sigma: ArrayLike | None = None,
weight_type: str = "robust",
**weight_config: bool | str | float,
) -> None:
super().__init__(equations, sigma=sigma)
self._weight_type = weight_type
self._weight_config = weight_config
if weight_type not in COV_TYPES:
raise ValueError("Unknown estimator for weight_type")
if weight_type not in ("unadjusted", "homoskedastic") and sigma is not None:
warnings.warn(
"sigma has been provided but the estimated weight "
"matrix not unadjusted (homoskedastic). sigma will "
"be ignored.",
UserWarning,
)
weight_type = COV_TYPES[weight_type]
self._weight_est = GMM_W_EST[weight_type](**weight_config)
|
(self, equations: collections.abc.Mapping[str, collections.abc.Mapping[str, typing.Union[numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType]] | collections.abc.Sequence[typing.Union[numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType]]], *, sigma: Union[numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType] = None, weight_type: str = 'robust', **weight_config: bool | str | float) -> NoneType
|
42,952 |
linearmodels.system.model
|
_blocked_gmm
| null |
@staticmethod
def _blocked_gmm(
x: ArraySequence,
y: ArraySequence,
z: ArraySequence,
*,
w: Float64Array,
constraints: LinearConstraint | None = None,
) -> Float64Array:
k = len(x)
xpz = blocked_cross_prod(x, z, np.eye(k))
wi = np.linalg.inv(w)
xpz_wi_zpx = xpz @ wi @ xpz.T
zpy_arrs = []
for i in range(k):
zpy_arrs.append(z[i].T @ y[i])
zpy = np.vstack(zpy_arrs)
xpz_wi_zpy = xpz @ wi @ zpy
params = _parameters_from_xprod(xpz_wi_zpx, xpz_wi_zpy, constraints=constraints)
return params
|
(x: collections.abc.Sequence[numpy.ndarray], y: collections.abc.Sequence[numpy.ndarray], z: collections.abc.Sequence[numpy.ndarray], *, w: numpy.ndarray, constraints: Optional[linearmodels.system._utility.LinearConstraint] = None) -> numpy.ndarray
|
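For a single equation the blocked estimator collapses to the familiar linear GMM formula; a numpy sketch under that simplification (the real ``_blocked_gmm`` additionally handles the block-diagonal system structure and linear constraints):
# Sketch: one-equation linear GMM, beta = (X'Z Wi Z'X)^{-1} X'Z Wi Z'y
import numpy as np

def linear_gmm(x, y, z, w):
    xpz = x.T @ z
    wi = np.linalg.inv(w)
    return np.linalg.solve(xpz @ wi @ xpz.T, xpz @ wi @ (z.T @ y))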
42,958 |
linearmodels.system.model
|
_finalize_results
|
Collect results to return after GLS estimation
|
def _finalize_results(
self,
beta: Float64Array,
cov: Float64Array,
weps: Float64Array,
eps: Float64Array,
wmat: Float64Array,
sigma: Float64Array,
iter_count: int,
cov_type: str,
cov_config: dict[str, bool | float],
cov_est: GMMHeteroskedasticCovariance | GMMHomoskedasticCovariance,
) -> GMMSystemResults:
"""Collect results to return after GLS estimation"""
k = len(self._wy)
# Repackage results for individual equations
individual = AttrDict()
debiased = bool(cov_config.get("debiased", False))
method = f"{iter_count}-Step System GMM"
if iter_count > 2:
method = "Iterative System GMM"
for i in range(k):
cons = bool(self.has_constant.iloc[i])
if cons:
c = np.sqrt(self._w[i])
ye = self._wy[i] - c @ lstsq(c, self._wy[i], rcond=None)[0]
else:
ye = self._wy[i]
total_ss = float(np.squeeze(ye.T @ ye))
stats = self._common_indiv_results(
i,
beta,
cov,
weps,
eps,
method,
cov_type,
cov_est,
iter_count,
debiased,
cons,
total_ss,
weight_est=self._weight_est,
)
key = self._eq_labels[i]
individual[key] = stats
# Populate results dictionary
nobs = eps.size
results = self._common_results(
beta, cov, method, iter_count, nobs, cov_type, sigma, individual, debiased
)
# wresid is different between GLS and OLS
wresiduals = []
for individual_key in individual:
wresiduals.append(individual[individual_key].wresid)
wresid = np.hstack(wresiduals)
results["wresid"] = wresid
results["wmat"] = wmat
results["weight_type"] = self._weight_type
results["weight_config"] = self._weight_est.config
results["cov_estimator"] = cov_est
results["cov_config"] = cov_est.cov_config
results["weight_estimator"] = self._weight_est
results["j_stat"] = self._j_statistic(beta, wmat)
r2s = [individual[eq].r2 for eq in individual]
results["system_r2"] = self._system_r2(eps, sigma, "gls", False, debiased, r2s)
return GMMSystemResults(results)
|
(self, beta: numpy.ndarray, cov: numpy.ndarray, weps: numpy.ndarray, eps: numpy.ndarray, wmat: numpy.ndarray, sigma: numpy.ndarray, iter_count: int, cov_type: str, cov_config: dict[str, bool | float], cov_est: linearmodels.system.covariance.GMMHeteroskedasticCovariance | linearmodels.system.covariance.GMMHomoskedasticCovariance) -> linearmodels.system.results.GMMSystemResults
|
42,961 |
linearmodels.system.model
|
_j_statistic
|
J stat and test
Parameters
----------
params : ndarray
Estimated model parameters
weight_mat : ndarray
Weighting matrix used in estimation of the parameters
Returns
-------
stat : WaldTestStatistic
Test statistic
Notes
-----
Assumes that the efficient weighting matrix has been used. Using
other weighting matrices will not produce the correct test.
|
def _j_statistic(
self, params: Float64Array, weight_mat: Float64Array
) -> WaldTestStatistic:
"""
J stat and test
Parameters
----------
params : ndarray
Estimated model parameters
weight_mat : ndarray
Weighting matrix used in estimation of the parameters
Returns
-------
stat : WaldTestStatistic
Test statistic
Notes
-----
Assumes that the efficient weighting matrix has been used. Using
other weighting matrices will not produce the correct test.
"""
y, x, z = self._wy, self._wx, self._wz
k = len(x)
ze_lst = []
idx = 0
for i in range(k):
kx = x[i].shape[1]
beta = params[idx : idx + kx]
eps = y[i] - x[i] @ beta
ze_lst.append(z[i] * eps)
idx += kx
ze = np.concatenate(ze_lst, 1)
g_bar = ze.mean(0)
nobs = x[0].shape[0]
    stat = float(nobs * g_bar.T @ np.linalg.inv(weight_mat) @ g_bar)
null = "Expected moment conditions are equal to 0"
ninstr = sum(map(lambda a: a.shape[1], z))
nvar = sum(map(lambda a: a.shape[1], x))
ncons = 0 if self.constraints is None else self.constraints.r.shape[0]
return WaldTestStatistic(stat, null, ninstr - (nvar - ncons))
|
(self, params: numpy.ndarray, weight_mat: numpy.ndarray) -> linearmodels.shared.hypotheses.WaldTestStatistic
|
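A short sketch of consuming the statistic, assuming ``res`` is a fitted ``GMMSystemResults`` (e.g. from ``IVSystemGMM(...).fit()``); the test is chi-squared with ``ninstr - (nvar - ncons)`` degrees of freedom and is only valid under the efficient weighting matrix:
# Sketch: reading the over-identification test from fitted results (res assumed)
jstat = res.j_stat
print(jstat.stat, jstat.pval)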
42,968 |
linearmodels.system.model
|
fit
|
Estimate model parameters
Parameters
----------
iter_limit : int
Maximum number of iterations for iterative GLS
tol : float
Tolerance to use when checking for convergence in iterative GLS
initial_weight : ndarray
Initial weighting matrix to use in the first step. If not
specified, uses the average outer-product of the set containing
the exogenous variables and instruments.
cov_type : str
Name of covariance estimator. Valid options are
* "unadjusted", "homoskedastic" - Classic covariance estimator
* "robust", "heteroskedastic" - Heteroskedasticity robust
covariance estimator
**cov_config
Additional parameters to pass to covariance estimator. All
estimators support debiased which employs a small-sample adjustment
Returns
-------
GMMSystemResults
Estimation results
|
def fit(
self,
*,
iter_limit: int = 2,
tol: float = 1e-6,
initial_weight: Float64Array | None = None,
cov_type: str = "robust",
**cov_config: bool | float,
) -> GMMSystemResults:
"""
Estimate model parameters
Parameters
----------
iter_limit : int
Maximum number of iterations for iterative GLS
tol : float
Tolerance to use when checking for convergence in iterative GLS
initial_weight : ndarray
Initial weighting matrix to use in the first step. If not
specified, uses the average outer-product of the set containing
the exogenous variables and instruments.
cov_type : str
Name of covariance estimator. Valid options are
* "unadjusted", "homoskedastic" - Classic covariance estimator
* "robust", "heteroskedastic" - Heteroskedasticity robust
covariance estimator
**cov_config
Additional parameters to pass to covariance estimator. All
estimators support debiased which employs a small-sample adjustment
Returns
-------
GMMSystemResults
Estimation results
"""
if cov_type not in COV_TYPES:
raise ValueError(f"Unknown cov_type: {cov_type}")
# Parameter estimation
wx, wy, wz = self._wx, self._wy, self._wz
k = len(wx)
nobs = wx[0].shape[0]
k_total = sum(map(lambda a: a.shape[1], wz))
if initial_weight is None:
w = blocked_inner_prod(wz, np.eye(k_total)) / nobs
else:
w = initial_weight
assert w is not None
beta_last = beta = self._blocked_gmm(
wx, wy, wz, w=cast(Float64Array, w), constraints=self.constraints
)
_eps = []
loc = 0
for i in range(k):
nb = wx[i].shape[1]
b = beta[loc : loc + nb]
_eps.append(wy[i] - wx[i] @ b)
loc += nb
eps = np.hstack(_eps)
sigma = self._weight_est.sigma(eps, wx) if self._sigma is None else self._sigma
vinv = None
iters = 1
norm = 10 * tol + 1
while iters < iter_limit and norm > tol:
sigma = (
self._weight_est.sigma(eps, wx) if self._sigma is None else self._sigma
)
w = self._weight_est.weight_matrix(wx, wz, eps, sigma=sigma)
beta = self._blocked_gmm(
wx, wy, wz, w=cast(Float64Array, w), constraints=self.constraints
)
delta = beta_last - beta
if vinv is None:
winv = np.linalg.inv(w)
xpz = blocked_cross_prod(wx, wz, np.eye(k))
xpz = cast(Float64Array, xpz / nobs)
v = (xpz @ winv @ xpz.T) / nobs
vinv = inv(v)
norm = float(np.squeeze(delta.T @ vinv @ delta))
beta_last = beta
_eps = []
loc = 0
for i in range(k):
nb = wx[i].shape[1]
b = beta[loc : loc + nb]
_eps.append(wy[i] - wx[i] @ b)
loc += nb
eps = np.hstack(_eps)
iters += 1
cov_type = COV_TYPES[cov_type]
cov_est = GMM_COV_EST[cov_type]
cov = cov_est(
wx, wz, eps, w, sigma=sigma, constraints=self._constraints, **cov_config
)
weps = eps
_eps = []
loc = 0
x, y = self._x, self._y
for i in range(k):
nb = x[i].shape[1]
b = beta[loc : loc + nb]
_eps.append(y[i] - x[i] @ b)
loc += nb
eps = np.hstack(_eps)
iters += 1
return self._finalize_results(
beta,
cov.cov,
weps,
eps,
cast(np.ndarray, w),
sigma,
iters - 1,
cov_type,
cov_config,
cov,
)
|
(self, *, iter_limit: int = 2, tol: float = 1e-06, initial_weight: Optional[numpy.ndarray] = None, cov_type: str = 'robust', **cov_config: bool | float) -> linearmodels.system.results.GMMSystemResults
|
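A hedged usage sketch of the iterated system-GMM fit above, assuming the estimator is ``IVSystemGMM`` from ``linearmodels.system`` and using simulated data; the equation label and variable names are illustrative only.

import numpy as np
from linearmodels.system import IVSystemGMM

rs = np.random.RandomState(0)
n = 500
z = rs.standard_normal((n, 2))                       # instruments
endog = z @ np.ones((2, 1)) + rs.standard_normal((n, 1))
y = 0.5 * endog + rs.standard_normal((n, 1))
x = np.ones((n, 1))                                  # constant exogenous regressor
equations = {"eq1": {"dependent": y, "exog": x, "endog": endog, "instruments": z}}
mod = IVSystemGMM(equations)
# iterate the weighting matrix up to 10 times; debiased applies the
# small-sample adjustment supported by all covariance estimators
res = mod.fit(iter_limit=10, cov_type="robust", debiased=True)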
42,971 |
linearmodels.asset_pricing.model
|
LinearFactorModel
|
Linear factor model estimator
Parameters
----------
portfolios : array_like
Test portfolio returns (nobs by nportfolio)
factors : array_like
Priced factor returns (nobs by nfactor)
risk_free : bool
Flag indicating whether the risk-free rate should be estimated
from returns along with other risk premia. If False, the returns are
assumed to be excess returns using the correct risk-free rate.
sigma : array_like
Positive definite residual covariance (nportfolio by nportfolio)
Notes
-----
Suitable for traded or non-traded factors.
Implements a 2-step estimator of risk premia, factor loadings and model
tests.
The first stage model estimated is
.. math::
r_{it} = c_i + f_t \beta_i + \epsilon_{it}
where :math:`r_{it}` is the return on test portfolio i and
:math:`f_t` are the traded factor returns. The parameters :math:`c_i`
are required to allow non-traded to be tested, but are not economically
interesting. These are not reported.
The second stage model uses the estimated factor loadings from the first
and is
.. math::
\bar{r}_i = \lambda_0 + \hat{\beta}_i^\prime \lambda + \eta_i
where :math:`\bar{r}_i` is the average excess return to portfolio i and
:math:`\lambda_0` is only included if estimating the risk-free rate. GLS
is used in the second stage if ``sigma`` is provided.
The model is tested using the estimated values
:math:`\hat{\alpha}_i=\hat{\eta}_i`.
|
class LinearFactorModel(_LinearFactorModelBase):
r"""
Linear factor model estimator
Parameters
----------
portfolios : array_like
Test portfolio returns (nobs by nportfolio)
factors : array_like
Priced factor returns (nobs by nfactor)
risk_free : bool
Flag indicating whether the risk-free rate should be estimated
from returns along with other risk premia. If False, the returns are
assumed to be excess returns using the correct risk-free rate.
sigma : array_like
Positive definite residual covariance (nportfolio by nportfolio)
Notes
-----
Suitable for traded or non-traded factors.
Implements a 2-step estimator of risk premia, factor loadings and model
tests.
The first stage model estimated is
.. math::
r_{it} = c_i + f_t \beta_i + \epsilon_{it}
where :math:`r_{it}` is the return on test portfolio i and
:math:`f_t` are the traded factor returns. The parameters :math:`c_i`
are required to allow non-traded factors to be tested, but are not economically
interesting. These are not reported.
The second stage model uses the estimated factor loadings from the first
and is
.. math::
\bar{r}_i = \lambda_0 + \hat{\beta}_i^\prime \lambda + \eta_i
where :math:`\bar{r}_i` is the average excess return to portfolio i and
:math:`\lambda_0` is only included if estimating the risk-free rate. GLS
is used in the second stage if ``sigma`` is provided.
The model is tested using the estimated values
:math:`\hat{\alpha}_i=\hat{\eta}_i`.
"""
def __init__(
self,
portfolios: IVDataLike,
factors: IVDataLike,
*,
risk_free: bool = False,
sigma: ArrayLike | None = None,
) -> None:
super().__init__(portfolios, factors, risk_free=risk_free, sigma=sigma)
@classmethod
def from_formula(
cls,
formula: str,
data: DataFrame,
*,
portfolios: DataFrame | None = None,
risk_free: bool = False,
sigma: ArrayLike | None = None,
) -> LinearFactorModel:
"""
Parameters
----------
formula : str
Formula modified for the syntax described in the notes
data : DataFrame
DataFrame containing the variables used in the formula
portfolios : array_like
Portfolios to be used in the model. If provided, must use formula
syntax containing only factors.
risk_free : bool
Flag indicating whether the risk-free rate should be estimated
from returns along with other risk premia. If False, the returns are
assumed to be excess returns using the correct risk-free rate.
sigma : array_like
Positive definite residual covariance (nportfolio by nportfolio)
Returns
-------
LinearFactorModel
Model instance
Notes
-----
The formula can be used in one of two ways. The first specifies only the
factors and uses the data provided in ``portfolios`` as the test portfolios.
The second specifies the portfolios using ``+`` to separate the test portfolios
and ``~`` to separate the test portfolios from the factors.
Examples
--------
>>> from linearmodels.datasets import french
>>> from linearmodels.asset_pricing import LinearFactorModel
>>> data = french.load()
>>> formula = "S1M1 + S1M5 + S3M3 + S5M1 + S5M5 ~ MktRF + SMB + HML"
>>> mod = LinearFactorModel.from_formula(formula, data)
Using only factors
>>> portfolios = data[["S1M1", "S1M5", "S3M1", "S3M5", "S5M1", "S5M5"]]
>>> formula = "MktRF + SMB + HML"
>>> mod = LinearFactorModel.from_formula(formula, data, portfolios=portfolios)
"""
factors, portfolios, formula = cls._prepare_data_from_formula(
formula, data, portfolios
)
mod = cls(portfolios, factors, risk_free=risk_free, sigma=sigma)
mod.formula = formula
return mod
def fit(
self,
cov_type: str = "robust",
debiased: bool = True,
**cov_config: bool | int | str,
) -> LinearFactorModelResults:
"""
Estimate model parameters
Parameters
----------
cov_type : str
Name of covariance estimator
debiased : bool
Flag indicating whether to debias the covariance estimator using
a degree of freedom adjustment
**cov_config
Additional covariance-specific options. See Notes.
Returns
-------
LinearFactorModelResults
Results class with parameter estimates, covariance and test statistics
Notes
-----
The kernel covariance estimator takes the optional arguments
``kernel``, one of "bartlett", "parzen" or "qs" (quadratic spectral)
and ``bandwidth`` (a positive integer).
"""
nobs, nf, nport, nrf, s1, s2, s3 = self._boundaries()
excess_returns = not self._risk_free
f = self.factors.ndarray
p = self.portfolios.ndarray
nport = p.shape[1]
# Step 1, n regressions to get B
fc = np.c_[np.ones((nobs, 1)), f]
b = lstsq(fc, p, rcond=None)[0]  # nf+1 by nport
eps = p - fc @ b
if excess_returns:
betas = b[1:].T
else:
betas = b.T.copy()
betas[:, 0] = 1.0
sigma_m12 = self._sigma_m12
lam = lstsq(sigma_m12 @ betas, sigma_m12 @ p.mean(0)[:, None], rcond=None)[0]
expected = betas @ lam
pricing_errors = p - expected.T
# Moments
alphas = pricing_errors.mean(0)[:, None]
moments = self._moments(eps, betas, alphas, pricing_errors)
# Jacobian
jacobian = self._jacobian(betas, lam, alphas)
if cov_type not in ("robust", "heteroskedastic", "kernel"):
raise ValueError(f"Unknown weight: {cov_type}")
if cov_type in ("robust", "heteroskedastic"):
cov_est_inst = HeteroskedasticCovariance(
moments,
jacobian=jacobian,
center=False,
debiased=debiased,
df=fc.shape[1],
)
else: # "kernel":
bandwidth = get_float(cov_config, "bandwidth")
kernel = get_string(cov_config, "kernel")
cov_est_inst = KernelCovariance(
moments,
jacobian=jacobian,
center=False,
debiased=debiased,
df=fc.shape[1],
kernel=kernel,
bandwidth=bandwidth,
)
# VCV
full_vcv = cov_est_inst.cov
alpha_vcv = full_vcv[s2:, s2:]
stat = float(np.squeeze(alphas.T @ np.linalg.pinv(alpha_vcv) @ alphas))
jstat = WaldTestStatistic(
stat, "All alphas are 0", nport - nf - nrf, name="J-statistic"
)
total_ss = ((p - p.mean(0)[None, :]) ** 2).sum()
residual_ss = (eps**2).sum()
r2 = 1 - residual_ss / total_ss
rp = lam
rp_cov = full_vcv[s1:s2, s1:s2]
betas = betas if excess_returns else betas[:, 1:]
params = np.c_[alphas, betas]
param_names = []
for portfolio in self.portfolios.cols:
param_names.append(f"alpha-{portfolio}")
for factor in self.factors.cols:
param_names.append(f"beta-{portfolio}-{factor}")
if not excess_returns:
param_names.append("lambda-risk_free")
for factor in self.factors.cols:
param_names.append(f"lambda-{factor}")
# Pivot the vcv to drop unneeded blocks and put it in the correct order
order = np.reshape(np.arange(s1), (nport, nf + 1))
order[:, 0] = np.arange(s2, s3)
order = order.ravel()
order = np.r_[order, s1:s2]
full_vcv = full_vcv[order][:, order]
factor_names = list(self.factors.cols)
rp_names = factor_names[:]
if not excess_returns:
rp_names.insert(0, "risk_free")
res = AttrDict(
params=params,
cov=full_vcv,
betas=betas,
rp=rp,
rp_cov=rp_cov,
alphas=alphas,
alpha_vcv=alpha_vcv,
jstat=jstat,
rsquared=r2,
total_ss=total_ss,
residual_ss=residual_ss,
param_names=param_names,
portfolio_names=self.portfolios.cols,
factor_names=factor_names,
name=self._name,
cov_type=cov_type,
model=self,
nobs=nobs,
rp_names=rp_names,
cov_est=cov_est_inst,
)
return LinearFactorModelResults(res)
def _jacobian(
self, betas: Float64Array, lam: Float64Array, alphas: Float64Array
) -> Float64Array:
nobs, nf, nport, nrf, s1, s2, s3 = self._boundaries()
f = self.factors.ndarray
fc = np.c_[np.ones((nobs, 1)), f]
excess_returns = not self._risk_free
bc = betas
sigma_inv = self._sigma_inv
jac = np.eye((nport * (nf + 1)) + (nf + nrf) + nport)
fpf = fc.T @ fc / nobs
jac[:s1, :s1] = np.kron(np.eye(nport), fpf)
b_tilde = sigma_inv @ bc
alpha_tilde = sigma_inv @ alphas
_lam = lam if excess_returns else lam[1:]
for i in range(nport):
block = np.zeros((nf + nrf, nf + 1))
block[:, 1:] = b_tilde[[i]].T @ _lam.T
block[nrf:, 1:] -= alpha_tilde[i] * np.eye(nf)
jac[s1:s2, (i * (nf + 1)) : ((i + 1) * (nf + 1))] = block
jac[s1:s2, s1:s2] = bc.T @ sigma_inv @ bc
zero_lam = np.r_[[[0]], _lam]
jac[s2:s3, :s1] = np.kron(np.eye(nport), zero_lam.T)
jac[s2:s3, s1:s2] = bc
return jac
def _moments(
self,
eps: Float64Array,
betas: Float64Array,
alphas: Float64Array,
pricing_errors: Float64Array,
) -> Float64Array:
sigma_inv = self._sigma_inv
f = self.factors.ndarray
nobs, nf, nport, _, s1, s2, s3 = self._boundaries()
fc = np.c_[np.ones((nobs, 1)), f]
f_rep = np.tile(fc, (1, nport))
eps_rep = np.tile(eps, (nf + 1, 1))
eps_rep = np.reshape(eps_rep.T, (nport * (nf + 1), nobs)).T
# Moments
g1 = f_rep * eps_rep
g2 = pricing_errors @ sigma_inv @ betas
g3 = pricing_errors - alphas.T
return np.c_[g1, g2, g3]
|
(portfolios: 'IVDataLike', factors: 'IVDataLike', *, risk_free: 'bool' = False, sigma: 'ArrayLike | None' = None) -> 'None'
|
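A minimal numpy sketch of the two-step estimator described above, using simulated excess returns with traded factors; this illustrates the algorithm, not the library implementation.

import numpy as np

rs = np.random.RandomState(42)
nobs, nport, nf = 1000, 10, 3
f = 0.01 + 0.02 * rs.standard_normal((nobs, nf))      # traded factor returns
beta_true = rs.uniform(0.5, 1.5, (nport, nf))
p = f @ beta_true.T + 0.05 * rs.standard_normal((nobs, nport))
# Step 1: one time-series regression per portfolio to estimate the betas
fc = np.c_[np.ones((nobs, 1)), f]
b = np.linalg.lstsq(fc, p, rcond=None)[0]             # nf+1 by nport
betas = b[1:].T                                       # nport by nf
# Step 2: cross-sectional regression of mean returns on the betas
lam = np.linalg.lstsq(betas, p.mean(0)[:, None], rcond=None)[0]
alphas = p.mean(0)[:, None] - betas @ lam             # pricing errors
print(lam.ravel())  # close to f.mean(0) since the factors are traded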
42,972 |
linearmodels.asset_pricing.model
|
__init__
| null |
def __init__(
self,
portfolios: IVDataLike,
factors: IVDataLike,
*,
risk_free: bool = False,
sigma: ArrayLike | None = None,
) -> None:
super().__init__(portfolios, factors, risk_free=risk_free, sigma=sigma)
|
(self, portfolios: Union[linearmodels.iv.data.IVData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series], factors: Union[linearmodels.iv.data.IVData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series], *, risk_free: bool = False, sigma: Union[numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType] = None) -> NoneType
|
42,974 |
linearmodels.asset_pricing.model
|
__str__
| null |
def __str__(self) -> str:
out = super().__str__()
if np.any(self._sigma != np.eye(self.portfolios.shape[1])):
out += " using GLS"
out += f"\nEstimated risk-free rate: {self._risk_free}"
return out
|
(self) -> str
|
42,975 |
linearmodels.asset_pricing.model
|
_boundaries
| null |
def _boundaries(self) -> tuple[int, int, int, int, int, int, int]:
nobs, nf = self.factors.ndarray.shape
nport = self.portfolios.ndarray.shape[1]
nrf = int(bool(self._risk_free))
s1 = (nf + 1) * nport
s2 = s1 + (nf + nrf)
s3 = s2 + nport
return nobs, nf, nport, nrf, s1, s2, s3
|
(self) -> tuple[int, int, int, int, int, int, int]
|
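A small sketch of the parameter-vector layout implied by ``_boundaries``, using hypothetical dimensions (3 factors, 6 portfolios, no estimated risk-free rate):

nf, nport, nrf = 3, 6, 0
s1 = (nf + 1) * nport   # 24: first-stage intercepts and loadings, per portfolio
s2 = s1 + (nf + nrf)    # 27: followed by the risk premia (lambdas)
s3 = s2 + nport         # 33: followed by one alpha per portfolio
print(s1, s2, s3)       # 24 27 33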
42,976 |
linearmodels.asset_pricing.model
|
_drop_missing
| null |
def _drop_missing(self) -> BoolArray:
data = (self.portfolios, self.factors)
missing = cast(BoolArray, np.any(np.c_[[dh.isnull for dh in data]], 0))
if any(missing):
if all(missing):
raise ValueError(
"All observations contain missing data. "
"Model cannot be estimated."
)
self.portfolios.drop(missing)
self.factors.drop(missing)
missing_warning(missing)
return missing
|
(self) -> numpy.ndarray
|
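A minimal sketch of the combined missing-data mask that ``_drop_missing`` builds, here with two hypothetical numpy arrays standing in for the portfolio and factor data:

import numpy as np

portfolios = np.array([[1.0, np.nan], [0.5, 0.2], [0.1, 0.3]])
factors = np.array([[0.2], [np.nan], [0.4]])
missing = np.any(np.c_[[np.isnan(portfolios).any(1), np.isnan(factors).any(1)]], 0)
print(missing)  # [ True  True False]; flagged rows are dropped from both inputs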
42,977 |
linearmodels.asset_pricing.model
|
_jacobian
| null |
def _jacobian(
self, betas: Float64Array, lam: Float64Array, alphas: Float64Array
) -> Float64Array:
nobs, nf, nport, nrf, s1, s2, s3 = self._boundaries()
f = self.factors.ndarray
fc = np.c_[np.ones((nobs, 1)), f]
excess_returns = not self._risk_free
bc = betas
sigma_inv = self._sigma_inv
jac = np.eye((nport * (nf + 1)) + (nf + nrf) + nport)
fpf = fc.T @ fc / nobs
jac[:s1, :s1] = np.kron(np.eye(nport), fpf)
b_tilde = sigma_inv @ bc
alpha_tilde = sigma_inv @ alphas
_lam = lam if excess_returns else lam[1:]
for i in range(nport):
block = np.zeros((nf + nrf, nf + 1))
block[:, 1:] = b_tilde[[i]].T @ _lam.T
block[nrf:, 1:] -= alpha_tilde[i] * np.eye(nf)
jac[s1:s2, (i * (nf + 1)) : ((i + 1) * (nf + 1))] = block
jac[s1:s2, s1:s2] = bc.T @ sigma_inv @ bc
zero_lam = np.r_[[[0]], _lam]
jac[s2:s3, :s1] = np.kron(np.eye(nport), zero_lam.T)
jac[s2:s3, s1:s2] = bc
return jac
|
(self, betas: numpy.ndarray, lam: numpy.ndarray, alphas: numpy.ndarray) -> numpy.ndarray
|
42,978 |
linearmodels.asset_pricing.model
|
_moments
| null |
def _moments(
self,
eps: Float64Array,
betas: Float64Array,
alphas: Float64Array,
pricing_errors: Float64Array,
) -> Float64Array:
sigma_inv = self._sigma_inv
f = self.factors.ndarray
nobs, nf, nport, _, s1, s2, s3 = self._boundaries()
fc = np.c_[np.ones((nobs, 1)), f]
f_rep = np.tile(fc, (1, nport))
eps_rep = np.tile(eps, (nf + 1, 1))
eps_rep = np.reshape(eps_rep.T, (nport * (nf + 1), nobs)).T
# Moments
g1 = f_rep * eps_rep
g2 = pricing_errors @ sigma_inv @ betas
g3 = pricing_errors - alphas.T
return np.c_[g1, g2, g3]
|
(self, eps: numpy.ndarray, betas: numpy.ndarray, alphas: numpy.ndarray, pricing_errors: numpy.ndarray) -> numpy.ndarray
|
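A shape-only sketch of how the first moment block in ``_moments`` interleaves factors and residuals, using hypothetical small dimensions:

import numpy as np

nobs, nport, nf = 5, 2, 1
fc = np.c_[np.ones((nobs, 1)), np.arange(nobs, dtype=float)[:, None]]  # nobs by nf+1
eps = np.arange(nobs * nport, dtype=float).reshape(nobs, nport)
f_rep = np.tile(fc, (1, nport))
eps_rep = np.reshape(np.tile(eps, (nf + 1, 1)).T, (nport * (nf + 1), nobs)).T
# column j multiplies the residual of portfolio j // (nf + 1) by factor j % (nf + 1)
g1 = f_rep * eps_rep
print(g1.shape)  # (5, 4): epsilon_t otimes f_{c,t}, one row per observation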
42,979 |
linearmodels.asset_pricing.model
|
_prepare_data_from_formula
| null |
@staticmethod
def _prepare_data_from_formula(
formula: str, data: DataFrame, portfolios: DataFrame | None
) -> tuple[DataFrame, DataFrame, str]:
orig_formula = formula
na_action = NAAction("raise")
if portfolios is not None:
factors_mm = model_matrix(
formula + " + 0",
data,
context=0, # TODO: self._eval_env,
ensure_full_rank=True,
na_action=na_action,
)
factors = DataFrame(factors_mm)
else:
formula_components = formula.split("~")
portfolios_mm = model_matrix(
formula_components[0].strip() + " + 0",
data,
context=0, # TODO: self._eval_env,
ensure_full_rank=False,
na_action=na_action,
)
portfolios = DataFrame(portfolios_mm)
factors_mm = model_matrix(
formula_components[1].strip() + " + 0",
data,
context=0, # TODO: self._eval_env,
ensure_full_rank=False,
na_action=na_action,
)
factors = DataFrame(factors_mm)
return factors, portfolios, orig_formula
|
(formula: str, data: pandas.core.frame.DataFrame, portfolios: pandas.core.frame.DataFrame | None) -> tuple[pandas.core.frame.DataFrame, pandas.core.frame.DataFrame, str]
|
42,980 |
linearmodels.asset_pricing.model
|
_validate_additional_data
| null |
def _validate_additional_data(self) -> None:
f = self.factors.ndarray
p = self.portfolios.ndarray
nrp = f.shape[1] + int(self._risk_free)
if p.shape[1] < nrp:
raise ValueError(
"The number of test portfolio must be at least as "
"large as the number of risk premia, including the "
"risk free rate if estimated."
)
|
(self) -> NoneType
|
42,981 |
linearmodels.asset_pricing.model
|
_validate_data
| null |
def _validate_data(self) -> None:
p = self.portfolios.ndarray
f = self.factors.ndarray
if p.shape[0] != f.shape[0]:
raise ValueError(
"The number of observations in portfolios and "
"factors is not the same."
)
self._drop_missing()
p = cast(Float64Array, self.portfolios.ndarray)
f = cast(Float64Array, self.factors.ndarray)
if has_constant(p)[0]:
raise ValueError(
"portfolios must not contains a constant or "
"equivalent and must not have rank\n"
"less than the dimension of the smaller shape."
)
if has_constant(f)[0]:
raise ValueError("factors must not contain a constant or equivalent.")
if np.linalg.matrix_rank(f) < f.shape[1]:
raise ValueError(
"Model cannot be estimated. factors do not have full column rank."
)
if p.shape[0] < (f.shape[1] + 1):
raise ValueError(
"Model cannot be estimated. portfolios must have factors + 1 or "
"more returns to\nestimate the model parameters."
)
|
(self) -> NoneType
|
42,982 |
linearmodels.asset_pricing.model
|
fit
|
Estimate model parameters
Parameters
----------
cov_type : str
Name of covariance estimator
debiased : bool
Flag indicating whether to debias the covariance estimator using
a degree of freedom adjustment
**cov_config
Additional covariance-specific options. See Notes.
Returns
-------
LinearFactorModelResults
Results class with parameter estimates, covariance and test statistics
Notes
-----
The kernel covariance estimator takes the optional arguments
``kernel``, one of "bartlett", "parzen" or "qs" (quadratic spectral)
and ``bandwidth`` (a positive integer).
|
def fit(
self,
cov_type: str = "robust",
debiased: bool = True,
**cov_config: bool | int | str,
) -> LinearFactorModelResults:
"""
Estimate model parameters
Parameters
----------
cov_type : str
Name of covariance estimator
debiased : bool
Flag indicating whether to debias the covariance estimator using
a degree of freedom adjustment
**cov_config
Additional covariance-specific options. See Notes.
Returns
-------
LinearFactorModelResults
Results class with parameter estimates, covariance and test statistics
Notes
-----
The kernel covariance estimator takes the optional arguments
``kernel``, one of "bartlett", "parzen" or "qs" (quadratic spectral)
and ``bandwidth`` (a positive integer).
"""
nobs, nf, nport, nrf, s1, s2, s3 = self._boundaries()
excess_returns = not self._risk_free
f = self.factors.ndarray
p = self.portfolios.ndarray
nport = p.shape[1]
# Step 1, n regressions to get B
fc = np.c_[np.ones((nobs, 1)), f]
b = lstsq(fc, p, rcond=None)[0]  # nf+1 by nport
eps = p - fc @ b
if excess_returns:
betas = b[1:].T
else:
betas = b.T.copy()
betas[:, 0] = 1.0
sigma_m12 = self._sigma_m12
lam = lstsq(sigma_m12 @ betas, sigma_m12 @ p.mean(0)[:, None], rcond=None)[0]
expected = betas @ lam
pricing_errors = p - expected.T
# Moments
alphas = pricing_errors.mean(0)[:, None]
moments = self._moments(eps, betas, alphas, pricing_errors)
# Jacobian
jacobian = self._jacobian(betas, lam, alphas)
if cov_type not in ("robust", "heteroskedastic", "kernel"):
raise ValueError(f"Unknown weight: {cov_type}")
if cov_type in ("robust", "heteroskedastic"):
cov_est_inst = HeteroskedasticCovariance(
moments,
jacobian=jacobian,
center=False,
debiased=debiased,
df=fc.shape[1],
)
else: # "kernel":
bandwidth = get_float(cov_config, "bandwidth")
kernel = get_string(cov_config, "kernel")
cov_est_inst = KernelCovariance(
moments,
jacobian=jacobian,
center=False,
debiased=debiased,
df=fc.shape[1],
kernel=kernel,
bandwidth=bandwidth,
)
# VCV
full_vcv = cov_est_inst.cov
alpha_vcv = full_vcv[s2:, s2:]
stat = float(np.squeeze(alphas.T @ np.linalg.pinv(alpha_vcv) @ alphas))
jstat = WaldTestStatistic(
stat, "All alphas are 0", nport - nf - nrf, name="J-statistic"
)
total_ss = ((p - p.mean(0)[None, :]) ** 2).sum()
residual_ss = (eps**2).sum()
r2 = 1 - residual_ss / total_ss
rp = lam
rp_cov = full_vcv[s1:s2, s1:s2]
betas = betas if excess_returns else betas[:, 1:]
params = np.c_[alphas, betas]
param_names = []
for portfolio in self.portfolios.cols:
param_names.append(f"alpha-{portfolio}")
for factor in self.factors.cols:
param_names.append(f"beta-{portfolio}-{factor}")
if not excess_returns:
param_names.append("lambda-risk_free")
for factor in self.factors.cols:
param_names.append(f"lambda-{factor}")
# Pivot the vcv to drop unneeded blocks and put it in the correct order
order = np.reshape(np.arange(s1), (nport, nf + 1))
order[:, 0] = np.arange(s2, s3)
order = order.ravel()
order = np.r_[order, s1:s2]
full_vcv = full_vcv[order][:, order]
factor_names = list(self.factors.cols)
rp_names = factor_names[:]
if not excess_returns:
rp_names.insert(0, "risk_free")
res = AttrDict(
params=params,
cov=full_vcv,
betas=betas,
rp=rp,
rp_cov=rp_cov,
alphas=alphas,
alpha_vcv=alpha_vcv,
jstat=jstat,
rsquared=r2,
total_ss=total_ss,
residual_ss=residual_ss,
param_names=param_names,
portfolio_names=self.portfolios.cols,
factor_names=factor_names,
name=self._name,
cov_type=cov_type,
model=self,
nobs=nobs,
rp_names=rp_names,
cov_est=cov_est_inst,
)
return LinearFactorModelResults(res)
|
(self, cov_type: str = 'robust', debiased: bool = True, **cov_config: bool | int | str) -> linearmodels.asset_pricing.results.LinearFactorModelResults
|
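A hedged example of the covariance options documented in ``fit``, reusing the French test-portfolio data shown in the ``from_formula`` examples:

from linearmodels.datasets import french
from linearmodels.asset_pricing import LinearFactorModel

data = french.load()
formula = "S1M1 + S1M5 + S5M1 + S5M5 ~ MktRF + SMB + HML"
mod = LinearFactorModel.from_formula(formula, data)
res_robust = mod.fit(cov_type="robust")  # heteroskedasticity-robust (default)
# kernel (HAC) covariance with a Bartlett kernel and bandwidth 12
res_kernel = mod.fit(cov_type="kernel", kernel="bartlett", bandwidth=12)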
42,983 |
linearmodels.asset_pricing.model
|
LinearFactorModelGMM
|
GMM estimator of Linear factor models
Parameters
----------
portfolios : array_like
Test portfolio returns (nobs by nportfolio)
factors : array_like
Priced factors values (nobs by nfactor)
risk_free : bool
Flag indicating whether the risk-free rate should be estimated
from returns along with other risk premia. If False, the returns are
assumed to be excess returns using the correct risk-free rate.
Notes
-----
Suitable for traded or non-traded factors.
Implements a GMM estimator of risk premia, factor loadings and model
tests.
The moments are
.. math::
\left[\begin{array}{c}
\epsilon_{t}\otimes f_{c,t}\\
f_{t}-\mu
\end{array}\right]
and
.. math::
\epsilon_{t}=r_{t}-\left[1_{N}\;\beta\right]\lambda-\beta\left(f_{t}-\mu\right)
where :math:`r_{it}` is the return on test portfolio i and
:math:`f_t` are the factor returns.
The model is tested using the optimized objective function using the
usual GMM J statistic.
|
class LinearFactorModelGMM(_LinearFactorModelBase):
r"""
GMM estimator of Linear factor models
Parameters
----------
portfolios : array_like
Test portfolio returns (nobs by nportfolio)
factors : array_like
Priced factors values (nobs by nfactor)
risk_free : bool
Flag indicating whether the risk-free rate should be estimated
from returns along with other risk premia. If False, the returns are
assumed to be excess returns using the correct risk-free rate.
Notes
-----
Suitable for traded or non-traded factors.
Implements a GMM estimator of risk premia, factor loadings and model
tests.
The moments are
.. math::
\left[\begin{array}{c}
\epsilon_{t}\otimes f_{c,t}\\
f_{t}-\mu
\end{array}\right]
and
.. math::
\epsilon_{t}=r_{t}-\left[1_{N}\;\beta\right]\lambda-\beta\left(f_{t}-\mu\right)
where :math:`r_{it}` is the return on test portfolio i and
:math:`f_t` are the factor returns.
The model is tested using the optimized objective function using the
usual GMM J statistic.
"""
def __init__(
self, portfolios: IVDataLike, factors: IVDataLike, *, risk_free: bool = False
) -> None:
super().__init__(portfolios, factors, risk_free=risk_free)
@classmethod
def from_formula(
cls,
formula: str,
data: DataFrame,
*,
portfolios: DataFrame | None = None,
risk_free: bool = False,
) -> LinearFactorModelGMM:
"""
Parameters
----------
formula : str
Formula modified for the syntax described in the notes
data : DataFrame
DataFrame containing the variables used in the formula
portfolios : array_like
Portfolios to be used in the model. If provided, must use formula
syntax containing only factors.
risk_free : bool
Flag indicating whether the risk-free rate should be estimated
from returns along with other risk premia. If False, the returns are
assumed to be excess returns using the correct risk-free rate.
Returns
-------
LinearFactorModelGMM
Model instance
Notes
-----
The formula can be used in one of two ways. The first specifies only the
factors and uses the data provided in ``portfolios`` as the test portfolios.
The second specifies the portfolios using ``+`` to separate the test portfolios
and ``~`` to separate the test portfolios from the factors.
Examples
--------
>>> from linearmodels.datasets import french
>>> from linearmodels.asset_pricing import LinearFactorModelGMM
>>> data = french.load()
>>> formula = "S1M1 + S1M5 + S3M3 + S5M1 + S5M5 ~ MktRF + SMB + HML"
>>> mod = LinearFactorModelGMM.from_formula(formula, data)
Using only factors
>>> portfolios = data[["S1M1", "S1M5", "S3M1", "S3M5", "S5M1", "S5M5"]]
>>> formula = "MktRF + SMB + HML"
>>> mod = LinearFactorModelGMM.from_formula(formula, data, portfolios=portfolios)
"""
factors, portfolios, formula = cls._prepare_data_from_formula(
formula, data, portfolios
)
mod = cls(portfolios, factors, risk_free=risk_free)
mod.formula = formula
return mod
def fit(
self,
*,
center: bool = True,
use_cue: bool = False,
steps: int = 2,
disp: int = 10,
max_iter: int = 1000,
cov_type: str = "robust",
debiased: bool = True,
starting: ArrayLike | None = None,
opt_options: dict[str, Any] | None = None,
**cov_config: bool | int | str,
) -> GMMFactorModelResults:
"""
Estimate model parameters
Parameters
----------
center : bool
Flag indicating to center the moment conditions before computing
the weighting matrix.
use_cue : bool
Flag indicating to use continuously updating estimator
steps : int
Number of steps to use when estimating parameters. 2 corresponds
to the standard efficient GMM estimator. Higher values will
iterate until convergence or up to the number of steps given
disp : int
Number of iterations between printed updates. 0 or negative values
suppress output
max_iter : int
Maximum number of iterations when minimizing objective. Must be positive.
cov_type : str
Name of covariance estimator
debiased : bool
Flag indicating whether to debias the covariance estimator using
a degree of freedom adjustment
starting : array_like
Starting values to use in optimization. If not provided, two-pass
estimates are used.
opt_options : dict
Additional options to pass to scipy.optimize.minimize when
optimizing the objective function. If not provided, defers to
scipy to choose an appropriate optimizer. All minimize inputs
except ``fun``, ``x0``, and ``args`` can be overridden.
**cov_config
Additional covariance-specific options. See Notes.
Returns
-------
GMMFactorModelResults
Results class with parameter estimates, covariance and test statistics
Notes
-----
The kernel covariance estimator takes the optional arguments
``kernel``, one of "bartlett", "parzen" or "qs" (quadratic spectral)
and ``bandwidth`` (a positive integer).
"""
nobs, n = self.portfolios.shape
k = self.factors.shape[1]
excess_returns = not self._risk_free
nrf = int(not bool(excess_returns))
# 1. Starting values - use the two-pass estimator
mod = LinearFactorModel(
self.portfolios, self.factors, risk_free=self._risk_free
)
res = mod.fit()
betas = np.asarray(res.betas).ravel()
lam = np.asarray(res.risk_premia)
mu = self.factors.ndarray.mean(0)
sv = np.r_[betas, lam, mu][:, None]
if starting is not None:
starting = np.asarray(starting)
if starting.ndim == 1:
starting = starting[:, None]
if starting.shape != sv.shape:
raise ValueError(f"Starting values must have {sv.shape} elements.")
sv = starting
g = self._moments(sv, excess_returns)
g -= g.mean(0)[None, :] if center else 0
kernel: str | None = None
bandwidth: float | None = None
if cov_type not in ("robust", "heteroskedastic", "kernel"):
raise ValueError(f"Unknown weight: {cov_type}")
if cov_type in ("robust", "heteroskedastic"):
weight_est_instance = HeteroskedasticWeight(g, center=center)
cov_est = HeteroskedasticCovariance
else: # "kernel":
kernel = get_string(cov_config, "kernel")
bandwidth = get_float(cov_config, "bandwidth")
weight_est_instance = KernelWeight(
g, center=center, kernel=kernel, bandwidth=bandwidth
)
cov_est = KernelCovariance
w = weight_est_instance.w(g)
args = (excess_returns, w)
# 2. Step 1 using w = inv(s) from SV
callback = callback_factory(self._j, args, disp=disp)
_default_options: dict[str, Any] = {"callback": callback}
options = {"disp": bool(disp), "maxiter": max_iter}
opt_options = {} if opt_options is None else opt_options
options.update(opt_options.get("options", {}))
_default_options.update(opt_options)
_default_options["options"] = options
opt_res = minimize(
fun=self._j,
x0=np.squeeze(sv),
args=args,
**_default_options,
)
params = opt_res.x
last_obj = opt_res.fun
iters = 1
# 3. Step 2 using step 1 estimates
if not use_cue:
while iters < steps:
iters += 1
g = self._moments(params, excess_returns)
w = weight_est_instance.w(g)
args = (excess_returns, w)
# Subsequent steps: update w using the previous step's estimates
callback = callback_factory(self._j, args, disp=disp)
opt_res = minimize(
self._j,
params,
args=args,
callback=callback,
options={"disp": bool(disp), "maxiter": max_iter},
)
params = opt_res.x
obj = opt_res.fun
if np.abs(obj - last_obj) < 1e-6:
break
last_obj = obj
else:
cue_args = (excess_returns, weight_est_instance)
callback = callback_factory(self._j_cue, cue_args, disp=disp)
opt_res = minimize(
self._j_cue,
params,
args=cue_args,
callback=callback,
options={"disp": bool(disp), "maxiter": max_iter},
)
params = opt_res.x
# 4. Compute final S and G for inference
g = self._moments(params, excess_returns)
s = g.T @ g / nobs
jac = self._jacobian(params, excess_returns)
if cov_est is HeteroskedasticCovariance:
cov_est_inst = HeteroskedasticCovariance(
g,
jacobian=jac,
center=center,
debiased=debiased,
df=self.factors.shape[1],
)
else:
cov_est_inst = KernelCovariance(
g,
jacobian=jac,
center=center,
debiased=debiased,
df=self.factors.shape[1],
kernel=kernel,
bandwidth=bandwidth,
)
full_vcv = cov_est_inst.cov
sel = slice((n * k), (n * k + k + nrf))
rp = params[sel]
rp_cov = full_vcv[sel, sel]
sel = slice(0, (n * (k + 1)), (k + 1))
alphas = g.mean(0)[sel, None]
alpha_vcv = s[sel, sel] / nobs
stat = self._j(params, excess_returns, w)
jstat = WaldTestStatistic(
stat, "All alphas are 0", n - k - nrf, name="J-statistic"
)
# R2 calculation
betas = np.reshape(params[: (n * k)], (n, k))
resids = self.portfolios.ndarray - self.factors.ndarray @ betas.T
resids -= resids.mean(0)[None, :]
residual_ss = (resids**2).sum()
total = self.portfolios.ndarray
total = total - total.mean(0)[None, :]
total_ss = (total**2).sum()
r2 = 1.0 - residual_ss / total_ss
param_names = []
for portfolio in self.portfolios.cols:
for factor in self.factors.cols:
param_names.append(f"beta-{portfolio}-{factor}")
if not excess_returns:
param_names.append("lambda-risk_free")
param_names.extend([f"lambda-{f}" for f in self.factors.cols])
param_names.extend([f"mu-{f}" for f in self.factors.cols])
rp_names = list(self.factors.cols)[:]
if not excess_returns:
rp_names.insert(0, "risk_free")
params = np.c_[alphas, betas]
# 5. Return values
res_dict = AttrDict(
params=params,
cov=full_vcv,
betas=betas,
rp=rp,
rp_cov=rp_cov,
alphas=alphas,
alpha_vcv=alpha_vcv,
jstat=jstat,
rsquared=r2,
total_ss=total_ss,
residual_ss=residual_ss,
param_names=param_names,
portfolio_names=self.portfolios.cols,
factor_names=self.factors.cols,
name=self._name,
cov_type=cov_type,
model=self,
nobs=nobs,
rp_names=rp_names,
iter=iters,
cov_est=cov_est_inst,
)
return GMMFactorModelResults(res_dict)
def _moments(self, parameters: Float64Array, excess_returns: bool) -> Float64Array:
"""Calculate nobs by nmoments moment conditions"""
nrf = int(not excess_returns)
p = np.asarray(self.portfolios.ndarray, dtype=float)
nobs, n = p.shape
f = np.asarray(self.factors.ndarray, dtype=float)
k = f.shape[1]
s1, s2 = n * k, n * k + k + nrf
betas = parameters[:s1]
lam = parameters[s1:s2]
mu = parameters[s2:]
betas = np.reshape(betas, (n, k))
expected = np.c_[np.ones((n, nrf)), betas] @ lam
fe = f - mu.T
eps = p - expected.T - fe @ betas.T
f = np.column_stack((np.ones((nobs, 1)), f))
f = np.tile(f, (1, n))
eps = np.reshape(np.tile(eps, (k + 1, 1)).T, (n * (k + 1), nobs)).T
g = np.c_[eps * f, fe]
return g
def _j(
self, parameters: Float64Array, excess_returns: bool, w: Float64Array
) -> float:
"""Objective function"""
g = self._moments(parameters, excess_returns)
nobs = self.portfolios.shape[0]
gbar = g.mean(0)[:, None]
return nobs * float(np.squeeze(gbar.T @ w @ gbar))
def _j_cue(
self,
parameters: Float64Array,
excess_returns: bool,
weight_est: HeteroskedasticWeight | KernelWeight,
) -> float:
"""CUE Objective function"""
g = self._moments(parameters, excess_returns)
gbar = g.mean(0)[:, None]
nobs = self.portfolios.shape[0]
w = weight_est.w(g)
return nobs * float(np.squeeze(gbar.T @ w @ gbar))
def _jacobian(self, params: Float64Array, excess_returns: bool) -> Float64Array:
"""Jacobian matrix for inference"""
nobs, k = self.factors.shape
n = self.portfolios.shape[1]
nrf = int(bool(not excess_returns))
jac = np.zeros((n * k + n + k, params.shape[0]))
s1, s2 = (n * k), (n * k) + k + nrf
betas = params[:s1]
betas = np.reshape(betas, (n, k))
lam = params[s1:s2]
mu = params[-k:]
lam_tilde = lam if excess_returns else lam[1:]
f = self.factors.ndarray
fe = f - mu.T + lam_tilde.T
f_aug = np.c_[np.ones((nobs, 1)), f]
fef = f_aug.T @ fe / nobs
r1 = n * (k + 1)
jac[:r1, :s1] = np.kron(np.eye(n), fef)
jac12 = np.zeros((r1, (k + nrf)))
jac13 = np.zeros((r1, k))
iota = np.ones((nobs, 1))
for i in range(n):
if excess_returns:
b = betas[[i]]
else:
b = np.c_[[1], betas[[i]]]
jac12[(i * (k + 1)) : (i + 1) * (k + 1)] = f_aug.T @ (iota @ b) / nobs
b = betas[[i]]
jac13[(i * (k + 1)) : (i + 1) * (k + 1)] = -f_aug.T @ (iota @ b) / nobs
jac[:r1, s1:s2] = jac12
jac[:r1, s2:] = jac13
jac[-k:, -k:] = np.eye(k)
return jac
|
(portfolios: 'IVDataLike', factors: 'IVDataLike', *, risk_free: 'bool' = False) -> 'None'
|
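A hedged usage sketch of the GMM estimator, again reusing the formula interface from its docstring; ``disp=0`` suppresses the optimizer's progress output:

from linearmodels.datasets import french
from linearmodels.asset_pricing import LinearFactorModelGMM

data = french.load()
formula = "S1M1 + S1M5 + S5M1 + S5M5 ~ MktRF + SMB + HML"
mod = LinearFactorModelGMM.from_formula(formula, data)
res = mod.fit(steps=2, disp=0)           # two-step efficient GMM (the default)
res_cue = mod.fit(use_cue=True, disp=0)  # continuously updating estimator
print(res)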
42,984 |
linearmodels.asset_pricing.model
|
__init__
| null |
def __init__(
self, portfolios: IVDataLike, factors: IVDataLike, *, risk_free: bool = False
) -> None:
super().__init__(portfolios, factors, risk_free=risk_free)
|
(self, portfolios: Union[linearmodels.iv.data.IVData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series], factors: Union[linearmodels.iv.data.IVData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series], *, risk_free: bool = False) -> NoneType
|
42,989 |
linearmodels.asset_pricing.model
|
_j
|
Objective function
|
def _j(
self, parameters: Float64Array, excess_returns: bool, w: Float64Array
) -> float:
"""Objective function"""
g = self._moments(parameters, excess_returns)
nobs = self.portfolios.shape[0]
gbar = g.mean(0)[:, None]
return nobs * float(np.squeeze(gbar.T @ w @ gbar))
|
(self, parameters: numpy.ndarray, excess_returns: bool, w: numpy.ndarray) -> float
|
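A small numeric sketch of the quadratic form computed in ``_j``, J = nobs * gbar' W gbar, with stand-in moment conditions:

import numpy as np

rs = np.random.RandomState(0)
nobs, nmom = 250, 4
g = rs.standard_normal((nobs, nmom))      # stand-in moment conditions
w = np.linalg.inv(g.T @ g / nobs)         # inverse moment covariance as the weight
gbar = g.mean(0)[:, None]
j = nobs * float(np.squeeze(gbar.T @ w @ gbar))
print(j)  # the GMM objective; at the optimum it is the J statistic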
42,990 |
linearmodels.asset_pricing.model
|
_j_cue
|
CUE Objective function
|
def _j_cue(
self,
parameters: Float64Array,
excess_returns: bool,
weight_est: HeteroskedasticWeight | KernelWeight,
) -> float:
"""CUE Objective function"""
g = self._moments(parameters, excess_returns)
gbar = g.mean(0)[:, None]
nobs = self.portfolios.shape[0]
w = weight_est.w(g)
return nobs * float(np.squeeze(gbar.T @ w @ gbar))
|
(self, parameters: numpy.ndarray, excess_returns: bool, weight_est: linearmodels.asset_pricing.covariance.HeteroskedasticWeight | linearmodels.asset_pricing.covariance.KernelWeight) -> float
|
42,991 |
linearmodels.asset_pricing.model
|
_jacobian
|
Jacobian matrix for inference
|
def _jacobian(self, params: Float64Array, excess_returns: bool) -> Float64Array:
"""Jacobian matrix for inference"""
nobs, k = self.factors.shape
n = self.portfolios.shape[1]
nrf = int(bool(not excess_returns))
jac = np.zeros((n * k + n + k, params.shape[0]))
s1, s2 = (n * k), (n * k) + k + nrf
betas = params[:s1]
betas = np.reshape(betas, (n, k))
lam = params[s1:s2]
mu = params[-k:]
lam_tilde = lam if excess_returns else lam[1:]
f = self.factors.ndarray
fe = f - mu.T + lam_tilde.T
f_aug = np.c_[np.ones((nobs, 1)), f]
fef = f_aug.T @ fe / nobs
r1 = n * (k + 1)
jac[:r1, :s1] = np.kron(np.eye(n), fef)
jac12 = np.zeros((r1, (k + nrf)))
jac13 = np.zeros((r1, k))
iota = np.ones((nobs, 1))
for i in range(n):
if excess_returns:
b = betas[[i]]
else:
b = np.c_[[1], betas[[i]]]
jac12[(i * (k + 1)) : (i + 1) * (k + 1)] = f_aug.T @ (iota @ b) / nobs
b = betas[[i]]
jac13[(i * (k + 1)) : (i + 1) * (k + 1)] = -f_aug.T @ (iota @ b) / nobs
jac[:r1, s1:s2] = jac12
jac[:r1, s2:] = jac13
jac[-k:, -k:] = np.eye(k)
return jac
|
(self, params: numpy.ndarray, excess_returns: bool) -> numpy.ndarray
|
42,992 |
linearmodels.asset_pricing.model
|
_moments
|
Calculate nobs by nmoments moment conditions
|
def _moments(self, parameters: Float64Array, excess_returns: bool) -> Float64Array:
"""Calculate nobs by nmoments moment conditions"""
nrf = int(not excess_returns)
p = np.asarray(self.portfolios.ndarray, dtype=float)
nobs, n = p.shape
f = np.asarray(self.factors.ndarray, dtype=float)
k = f.shape[1]
s1, s2 = n * k, n * k + k + nrf
betas = parameters[:s1]
lam = parameters[s1:s2]
mu = parameters[s2:]
betas = np.reshape(betas, (n, k))
expected = np.c_[np.ones((n, nrf)), betas] @ lam
fe = f - mu.T
eps = p - expected.T - fe @ betas.T
f = np.column_stack((np.ones((nobs, 1)), f))
f = np.tile(f, (1, n))
eps = np.reshape(np.tile(eps, (k + 1, 1)).T, (n * (k + 1), nobs)).T
g = np.c_[eps * f, fe]
return g
|
(self, parameters: numpy.ndarray, excess_returns: bool) -> numpy.ndarray
|
42,996 |
linearmodels.asset_pricing.model
|
fit
|
Estimate model parameters
Parameters
----------
center : bool
Flag indicating to center the moment conditions before computing
the weighting matrix.
use_cue : bool
Flag indicating to use continuously updating estimator
steps : int
Number of steps to use when estimating parameters. 2 corresponds
to the standard efficient GMM estimator. Higher values will
iterate until convergence or up to the number of steps given
disp : int
Number of iterations between printed updates. 0 or negative values
suppress output
max_iter : int
Maximum number of iterations when minimizing objective. Must be positive.
cov_type : str
Name of covariance estimator
debiased : bool
Flag indicating whether to debias the covariance estimator using
a degree of freedom adjustment
starting : array_like
Starting values to use in optimization. If not provided, two-pass
estimates are used.
opt_options : dict
Additional options to pass to scipy.optimize.minimize when
optimizing the objective function. If not provided, defers to
scipy to choose an appropriate optimizer. All minimize inputs
except ``fun``, ``x0``, and ``args`` can be overridden.
**cov_config
Additional covariance-specific options. See Notes.
Returns
-------
GMMFactorModelResults
Results class with parameter estimates, covariance and test statistics
Notes
-----
The kernel covariance estimator takes the optional arguments
``kernel``, one of "bartlett", "parzen" or "qs" (quadratic spectral)
and ``bandwidth`` (a positive integer).
|
def fit(
self,
*,
center: bool = True,
use_cue: bool = False,
steps: int = 2,
disp: int = 10,
max_iter: int = 1000,
cov_type: str = "robust",
debiased: bool = True,
starting: ArrayLike | None = None,
opt_options: dict[str, Any] | None = None,
**cov_config: bool | int | str,
) -> GMMFactorModelResults:
"""
Estimate model parameters
Parameters
----------
center : bool
Flag indicating to center the moment conditions before computing
the weighting matrix.
use_cue : bool
Flag indicating to use continuously updating estimator
steps : int
Number of steps to use when estimating parameters. 2 corresponds
to the standard efficient GMM estimator. Higher values will
iterate until convergence or up to the number of steps given
disp : int
Number of iterations between printed updates. 0 or negative values
suppress output
max_iter : int
Maximum number of iterations when minimizing objective. Must be positive.
cov_type : str
Name of covariance estimator
debiased : bool
Flag indicating whether to debias the covariance estimator using
a degree of freedom adjustment
starting : array_like
Starting values to use in optimization. If not provided, two-pass
estimates are used.
opt_options : dict
Additional options to pass to scipy.optimize.minimize when
optimizing the objective function. If not provided, defers to
scipy to choose an appropriate optimizer. All minimize inputs
except ``fun``, ``x0``, and ``args`` can be overridden.
**cov_config
Additional covariance-specific options. See Notes.
Returns
-------
GMMFactorModelResults
Results class with parameter estimates, covariance and test statistics
Notes
-----
The kernel covariance estimator takes the optional arguments
``kernel``, one of "bartlett", "parzen" or "qs" (quadratic spectral)
and ``bandwidth`` (a positive integer).
"""
nobs, n = self.portfolios.shape
k = self.factors.shape[1]
excess_returns = not self._risk_free
nrf = int(not bool(excess_returns))
# 1. Starting values - use the two-pass estimator
mod = LinearFactorModel(
self.portfolios, self.factors, risk_free=self._risk_free
)
res = mod.fit()
betas = np.asarray(res.betas).ravel()
lam = np.asarray(res.risk_premia)
mu = self.factors.ndarray.mean(0)
sv = np.r_[betas, lam, mu][:, None]
if starting is not None:
starting = np.asarray(starting)
if starting.ndim == 1:
starting = starting[:, None]
if starting.shape != sv.shape:
raise ValueError(f"Starting values must have {sv.shape} elements.")
sv = starting
g = self._moments(sv, excess_returns)
g -= g.mean(0)[None, :] if center else 0
kernel: str | None = None
bandwidth: float | None = None
if cov_type not in ("robust", "heteroskedastic", "kernel"):
raise ValueError(f"Unknown weight: {cov_type}")
if cov_type in ("robust", "heteroskedastic"):
weight_est_instance = HeteroskedasticWeight(g, center=center)
cov_est = HeteroskedasticCovariance
else: # "kernel":
kernel = get_string(cov_config, "kernel")
bandwidth = get_float(cov_config, "bandwidth")
weight_est_instance = KernelWeight(
g, center=center, kernel=kernel, bandwidth=bandwidth
)
cov_est = KernelCovariance
w = weight_est_instance.w(g)
args = (excess_returns, w)
# 2. Step 1 using w = inv(s) from SV
callback = callback_factory(self._j, args, disp=disp)
_default_options: dict[str, Any] = {"callback": callback}
options = {"disp": bool(disp), "maxiter": max_iter}
opt_options = {} if opt_options is None else opt_options
options.update(opt_options.get("options", {}))
_default_options.update(opt_options)
_default_options["options"] = options
opt_res = minimize(
fun=self._j,
x0=np.squeeze(sv),
args=args,
**_default_options,
)
params = opt_res.x
last_obj = opt_res.fun
iters = 1
# 3. Step 2 using step 1 estimates
if not use_cue:
while iters < steps:
iters += 1
g = self._moments(params, excess_returns)
w = weight_est_instance.w(g)
args = (excess_returns, w)
# Subsequent steps: update w using the previous step's estimates
callback = callback_factory(self._j, args, disp=disp)
opt_res = minimize(
self._j,
params,
args=args,
callback=callback,
options={"disp": bool(disp), "maxiter": max_iter},
)
params = opt_res.x
obj = opt_res.fun
if np.abs(obj - last_obj) < 1e-6:
break
last_obj = obj
else:
cue_args = (excess_returns, weight_est_instance)
callback = callback_factory(self._j_cue, cue_args, disp=disp)
opt_res = minimize(
self._j_cue,
params,
args=cue_args,
callback=callback,
options={"disp": bool(disp), "maxiter": max_iter},
)
params = opt_res.x
# 4. Compute final S and G for inference
g = self._moments(params, excess_returns)
s = g.T @ g / nobs
jac = self._jacobian(params, excess_returns)
if cov_est is HeteroskedasticCovariance:
cov_est_inst = HeteroskedasticCovariance(
g,
jacobian=jac,
center=center,
debiased=debiased,
df=self.factors.shape[1],
)
else:
cov_est_inst = KernelCovariance(
g,
jacobian=jac,
center=center,
debiased=debiased,
df=self.factors.shape[1],
kernel=kernel,
bandwidth=bandwidth,
)
full_vcv = cov_est_inst.cov
sel = slice((n * k), (n * k + k + nrf))
rp = params[sel]
rp_cov = full_vcv[sel, sel]
sel = slice(0, (n * (k + 1)), (k + 1))
alphas = g.mean(0)[sel, None]
alpha_vcv = s[sel, sel] / nobs
stat = self._j(params, excess_returns, w)
jstat = WaldTestStatistic(
stat, "All alphas are 0", n - k - nrf, name="J-statistic"
)
# R2 calculation
betas = np.reshape(params[: (n * k)], (n, k))
resids = self.portfolios.ndarray - self.factors.ndarray @ betas.T
resids -= resids.mean(0)[None, :]
residual_ss = (resids**2).sum()
total = self.portfolios.ndarray
total = total - total.mean(0)[None, :]
total_ss = (total**2).sum()
r2 = 1.0 - residual_ss / total_ss
param_names = []
for portfolio in self.portfolios.cols:
for factor in self.factors.cols:
param_names.append(f"beta-{portfolio}-{factor}")
if not excess_returns:
param_names.append("lambda-risk_free")
param_names.extend([f"lambda-{f}" for f in self.factors.cols])
param_names.extend([f"mu-{f}" for f in self.factors.cols])
rp_names = list(self.factors.cols)[:]
if not excess_returns:
rp_names.insert(0, "risk_free")
params = np.c_[alphas, betas]
# 5. Return values
res_dict = AttrDict(
params=params,
cov=full_vcv,
betas=betas,
rp=rp,
rp_cov=rp_cov,
alphas=alphas,
alpha_vcv=alpha_vcv,
jstat=jstat,
rsquared=r2,
total_ss=total_ss,
residual_ss=residual_ss,
param_names=param_names,
portfolio_names=self.portfolios.cols,
factor_names=self.factors.cols,
name=self._name,
cov_type=cov_type,
model=self,
nobs=nobs,
rp_names=rp_names,
iter=iters,
cov_est=cov_est_inst,
)
return GMMFactorModelResults(res_dict)
|
(self, *, center: bool = True, use_cue: bool = False, steps: int = 2, disp: int = 10, max_iter: int = 1000, cov_type: str = 'robust', debiased: bool = True, starting: Union[numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType] = None, opt_options: Optional[dict[str, Any]] = None, **cov_config: bool | int | str) -> linearmodels.asset_pricing.results.GMMFactorModelResults
|
42,997 |
linearmodels.iv.model
|
_OLS
|
Computes OLS estimates when required
Private class used when the model reduces to OLS. Use the statsmodels
version when a supported public API is needed.
Parameters
----------
dependent : array_like
Endogenous variables (nobs by 1)
exog : array_like
Exogenous regressors (nobs by nexog)
weights : array_like
Observation weights used in estimation
Notes
-----
Uses IVLIML internally with kappa=0, setting endog and instruments to
None, to estimate OLS models.
See Also
--------
statsmodels.regression.linear_model.OLS,
statsmodels.regression.linear_model.GLS
|
class _OLS(IVLIML):
"""
Computes OLS estimates when required
Private class used when the model reduces to OLS. Use the statsmodels
version when a supported public API is needed.
Parameters
----------
dependent : array_like
Endogenous variables (nobs by 1)
exog : array_like
Exogenous regressors (nobs by nexog)
weights : array_like
Observation weights used in estimation
Notes
-----
Uses IVLIML internally with kappa=0, setting endog and instruments to
None, to estimate OLS models.
See Also
--------
statsmodels.regression.linear_model.OLS,
statsmodels.regression.linear_model.GLS
"""
def __init__(
self,
dependent: IVDataLike,
exog: IVDataLike,
*,
weights: IVDataLike | None = None,
):
super().__init__(dependent, exog, None, None, weights=weights, kappa=0.0)
self._result_container = OLSResults
|
(dependent: 'IVDataLike', exog: 'IVDataLike', *, weights: 'IVDataLike | None' = None)
|
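A hedged check, on simulated data, that the kappa=0 configuration used by ``_OLS`` reproduces ordinary least squares:

import numpy as np
from linearmodels.iv import IVLIML

rs = np.random.RandomState(0)
x = np.c_[np.ones(200), rs.standard_normal((200, 2))]
y = x @ np.array([[1.0], [0.5], [-0.2]]) + rs.standard_normal((200, 1))
res = IVLIML(y, x, None, None, kappa=0.0).fit()
beta_ols = np.linalg.lstsq(x, y, rcond=None)[0]
print(np.allclose(np.asarray(res.params)[:, None], beta_ols))  # True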
42,998 |
linearmodels.iv.model
|
__init__
| null |
def __init__(
self,
dependent: IVDataLike,
exog: IVDataLike,
*,
weights: IVDataLike | None = None,
):
super().__init__(dependent, exog, None, None, weights=weights, kappa=0.0)
self._result_container = OLSResults
|
(self, dependent: Union[linearmodels.iv.data.IVData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series], exog: Union[linearmodels.iv.data.IVData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series], *, weights: Union[linearmodels.iv.data.IVData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType] = None)
|
43,010 |
linearmodels.panel.model
|
PanelOLS
|
One- and two-way fixed effects estimator for panel data
Parameters
----------
dependent : array_like
Dependent (left-hand-side) variable (time by entity).
exog : array_like
Exogenous or right-hand-side variables (variable by time by entity).
weights : array_like
Weights to use in estimation. Assumes the residual variance is
proportional to the inverse of the weight so that the residual times
the weight is homoskedastic.
entity_effects : bool
Flag whether to include entity (fixed) effects in the model
time_effects : bool
Flag whether to include time effects in the model
other_effects : array_like
Category codes to use for any effects that are not entity or time
effects. Each variable is treated as an effect.
singletons : bool
Flag indicating whether to drop singleton observations
drop_absorbed : bool
Flag indicating whether to drop absorbed variables
check_rank : bool
Flag indicating whether to perform a rank check on the exogenous
variables to ensure that the model is identified. Skipping this
check can reduce the time required to validate a model specification.
Results may be numerically unstable if this check is skipped and
the matrix is not full rank.
Notes
-----
Many models can be estimated. The most common includes entity effects and
can be described by
.. math::
y_{it} = \alpha_i + \beta^{\prime}x_{it} + \epsilon_{it}
where :math:`\alpha_i` is included if ``entity_effects=True``.
Time effects are also supported, which leads to a model of the form
.. math::
y_{it}= \gamma_t + \beta^{\prime}x_{it} + \epsilon_{it}
where :math:`\gamma_t` is included if ``time_effects=True``.
Both effects can be simultaneously used,
.. math::
y_{it}=\alpha_i + \gamma_t + \beta^{\prime}x_{it} + \epsilon_{it}
Additionally, arbitrary effects can be specified using categorical variables.
If both ``entity_effects`` and ``time_effects`` are ``False``, and no other
effects are included, the model reduces to :class:`PooledOLS`.
The model supports at most two effects, which can be entity-time,
entity-other, time-other, or two other effects.
|
class PanelOLS(_PanelModelBase):
r"""
One- and two-way fixed effects estimator for panel data
Parameters
----------
dependent : array_like
Dependent (left-hand-side) variable (time by entity).
exog : array_like
Exogenous or right-hand-side variables (variable by time by entity).
weights : array_like
Weights to use in estimation. Assumes the residual variance is
proportional to the inverse of the weight so that the residual times
the weight is homoskedastic.
entity_effects : bool
Flag whether to include entity (fixed) effects in the model
time_effects : bool
Flag whether to include time effects in the model
other_effects : array_like
Category codes to use for any effects that are not entity or time
effects. Each variable is treated as an effect.
singletons : bool
Flag indicating whether to drop singleton observations
drop_absorbed : bool
Flag indicating whether to drop absorbed variables
check_rank : bool
Flag indicating whether to perform a rank check on the exogenous
variables to ensure that the model is identified. Skipping this
check can reduce the time required to validate a model specification.
Results may be numerically unstable if this check is skipped and
the matrix is not full rank.
Notes
-----
Many models can be estimated. The most common includes entity effects and
can be described by
.. math::
y_{it} = \alpha_i + \beta^{\prime}x_{it} + \epsilon_{it}
where :math:`\alpha_i` is included if ``entity_effects=True``.
Time effects are also supported, which leads to a model of the form
.. math::
y_{it}= \gamma_t + \beta^{\prime}x_{it} + \epsilon_{it}
where :math:`\gamma_t` is included if ``time_effects=True``.
Both effects can be simultaneously used,
.. math::
y_{it}=\alpha_i + \gamma_t + \beta^{\prime}x_{it} + \epsilon_{it}
Additionally, arbitrary effects can be specified using categorical variables.
If both ``entity_effects`` and ``time_effects`` are ``False``, and no other
effects are included, the model reduces to :class:`PooledOLS`.
The model supports at most two effects, which can be entity-time,
entity-other, time-other, or two other effects.
"""
def __init__(
self,
dependent: PanelDataLike,
exog: PanelDataLike,
*,
weights: PanelDataLike | None = None,
entity_effects: bool = False,
time_effects: bool = False,
other_effects: PanelDataLike | None = None,
singletons: bool = True,
drop_absorbed: bool = False,
check_rank: bool = True,
) -> None:
super().__init__(dependent, exog, weights=weights, check_rank=check_rank)
self._entity_effects = entity_effects
self._time_effects = time_effects
self._other_effect_cats: PanelData | None = None
self._singletons = singletons
self._other_effects = self._validate_effects(other_effects)
self._has_effect = entity_effects or time_effects or self.other_effects
self._drop_absorbed = drop_absorbed
self._singleton_index = None
self._drop_singletons()
def _collect_effects(self) -> NumericArray:
if not self._has_effect:
return np.empty((self.dependent.shape[0], 0))
effects = []
if self.entity_effects:
effects.append(np.asarray(self.dependent.entity_ids).squeeze())
if self.time_effects:
effects.append(np.asarray(self.dependent.time_ids).squeeze())
if self.other_effects:
assert self._other_effect_cats is not None
other = self._other_effect_cats.dataframe
for col in other:
effects.append(np.asarray(other[col]).squeeze())
return np.column_stack(effects)
def _drop_singletons(self) -> None:
if self._singletons or not self._has_effect:
return
effects = self._collect_effects()
retain = in_2core_graph(effects)
if np.all(retain):
return
import warnings as warn
nobs = retain.shape[0]
ndropped = nobs - retain.sum()
warn.warn(
f"{ndropped} singleton observations dropped",
SingletonWarning,
stacklevel=3,
)
drop = ~retain
self._singleton_index = cast(BoolArray, drop)
self.dependent.drop(drop)
self.exog.drop(drop)
self.weights.drop(drop)
if self.other_effects:
assert self._other_effect_cats is not None
self._other_effect_cats.drop(drop)
# Reverify exog matrix
self._check_exog_rank()
def __str__(self) -> str:
out = super().__str__()
additional = (
"\nEntity Effects: {ee}, Time Effects: {te}, Num Other Effects: {oe}"
)
oe = 0
if self.other_effects:
assert self._other_effect_cats is not None
oe = self._other_effect_cats.nvar
additional = additional.format(
ee=self.entity_effects, te=self.time_effects, oe=oe
)
out += additional
return out
def _validate_effects(self, effects: PanelDataLike | None) -> bool:
"""Check model effects"""
if effects is None:
return False
effects = PanelData(effects, var_name="OtherEffect", convert_dummies=False)
if effects.shape[1:] != self._original_shape[1:]:
raise ValueError(
"other_effects must have the same number of "
"entities and time periods as dependent."
)
num_effects = effects.nvar
if num_effects + self.entity_effects + self.time_effects > 2:
raise ValueError("At most two effects supported.")
cats = {}
effects_frame = effects.dataframe
for col in effects_frame:
cat = Categorical(effects_frame[col])
# TODO: Bug in pandas-stubs
# https://github.com/pandas-dev/pandas-stubs/issues/111
cats[col] = cat.codes.astype(np.int64) # type: ignore
cats_df = DataFrame(cats, index=effects_frame.index)
cats_df = cats_df[effects_frame.columns]
other_effects = PanelData(cats_df)
other_effects.drop(~self.not_null)
self._other_effect_cats = other_effects
cats_array = other_effects.values2d
nested = False
nesting_effect = ""
if cats_array.shape[1] == 2:
nested = self._is_effect_nested(cats_array[:, [0]], cats_array[:, [1]])
nested |= self._is_effect_nested(cats_array[:, [1]], cats_array[:, [0]])
nesting_effect = "other effects"
elif self.entity_effects:
nested = self._is_effect_nested(
cats_array[:, [0]], self.dependent.entity_ids
)
nested |= self._is_effect_nested(
self.dependent.entity_ids, cats_array[:, [0]]
)
nesting_effect = "entity effects"
elif self.time_effects:
nested = self._is_effect_nested(cats_array[:, [0]], self.dependent.time_ids)
nested |= self._is_effect_nested(
self.dependent.time_ids, cats_array[:, [0]]
)
nesting_effect = "time effects"
if nested:
raise ValueError(
"Included other effects nest or are nested "
"by {effect}".format(effect=nesting_effect)
)
return True
@property
def entity_effects(self) -> bool:
"""Flag indicating whether entity effects are included"""
return self._entity_effects
@property
def time_effects(self) -> bool:
"""Flag indicating whether time effects are included"""
return self._time_effects
@property
def other_effects(self) -> bool:
"""Flag indicating whether other (generic) effects are included"""
return self._other_effects
@classmethod
def from_formula(
cls,
formula: str,
data: PanelDataLike,
*,
weights: PanelDataLike | None = None,
other_effects: PanelDataLike | None = None,
singletons: bool = True,
drop_absorbed: bool = False,
check_rank: bool = True,
) -> PanelOLS:
"""
Create a model from a formula
Parameters
----------
formula : str
Formula to transform into model. Conforms to formulaic formula
rules with two special variable names, EntityEffects and
TimeEffects which can be used to specify that the model should
contain an entity effect or a time effect, respectively. See
Examples.
data : array_like
Data structure that can be coerced into a PanelData. In most
cases, this should be a multi-index DataFrame where the level 0
index contains the entities and the level 1 contains the time.
weights: array_like
Weights to use in estimation. Assumes the residual variance is
proportional to the inverse of the weight so that the residual times
the weight is homoskedastic.
other_effects : array_like
Category codes to use for any effects that are not entity or time
effects. Each variable is treated as an effect.
singletons : bool
Flag indicating whether to drop singleton observations
drop_absorbed : bool
Flag indicating whether to drop absorbed variables
check_rank : bool
Flag indicating whether to perform a rank check on the exogenous
variables to ensure that the model is identified. Skipping this
check can reduce the time required to validate a model
specification. Results may be numerically unstable if this check
is skipped and the matrix is not full rank.
Returns
-------
PanelOLS
Model specified using the formula
Examples
--------
>>> from linearmodels import PanelOLS
>>> from linearmodels.panel import generate_panel_data
>>> panel_data = generate_panel_data()
>>> mod = PanelOLS.from_formula("y ~ 1 + x1 + EntityEffects", panel_data.data)
>>> res = mod.fit(cov_type="clustered", cluster_entity=True)
"""
parser = PanelFormulaParser(formula, data, context=capture_context(1))
entity_effect = parser.entity_effect
time_effect = parser.time_effect
dependent, exog = parser.data
mod = cls(
dependent,
exog,
entity_effects=entity_effect,
time_effects=time_effect,
weights=weights,
other_effects=other_effects,
singletons=singletons,
drop_absorbed=drop_absorbed,
check_rank=check_rank,
)
mod.formula = formula
return mod
def _lsmr_path(
self,
) -> tuple[Float64Array, Float64Array, Float64Array, Float64Array, Float64Array]:
"""Sparse implementation, works for all scenarios"""
y = cast(Float64Array, self.dependent.values2d)
x = cast(Float64Array, self.exog.values2d)
w = cast(Float64Array, self.weights.values2d)
root_w = np.sqrt(w)
wybar = root_w * (w.T @ y / w.sum())
wy = root_w * y
wx = root_w * x
if not self._has_effect:
y_effect, x_effect = np.zeros_like(wy), np.zeros_like(wx)
return wy, wx, wybar, y_effect, x_effect
wy_gm = wybar
wx_gm = root_w * (w.T @ x / w.sum())
root_w_sparse = csc_matrix(root_w)
cats_l: list[IntArray | Float64Array] = []
if self.entity_effects:
cats_l.append(self.dependent.entity_ids)
if self.time_effects:
cats_l.append(self.dependent.time_ids)
if self.other_effects:
assert self._other_effect_cats is not None
cats_l.append(self._other_effect_cats.values2d)
cats = np.concatenate(cats_l, 1)
wd, cond = dummy_matrix(cats, precondition=True)
assert isinstance(wd, csc_matrix)
if self._is_weighted:
wd = wd.multiply(root_w_sparse)
wx_mean_l = []
for i in range(x.shape[1]):
cond_mean = lsmr(wd, wx[:, i], atol=1e-8, btol=1e-8)[0]
cond_mean /= cond
wx_mean_l.append(cond_mean)
wx_mean = np.column_stack(wx_mean_l)
wy_mean = lsmr(wd, wy, atol=1e-8, btol=1e-8)[0]
wy_mean /= cond
wy_mean = wy_mean[:, None]
wx_mean = csc_matrix(wx_mean)
wy_mean = csc_matrix(wy_mean)
# Purge fitted, weighted values
sp_cond = diags(cond, format="csc")
wx = wx - (wd @ sp_cond @ wx_mean).A
wy = wy - (wd @ sp_cond @ wy_mean).A
if self.has_constant:
wy += wy_gm
wx += wx_gm
else:
wybar = 0
y_effects = y - wy / root_w
x_effects = x - wx / root_w
return wy, wx, wybar, y_effects, x_effects
def _slow_path(
self,
) -> tuple[Float64Array, Float64Array, Float64Array, Float64Array, Float64Array]:
"""Frisch-Waugh-Lovell implementation, works for all scenarios"""
w = cast(Float64Array, self.weights.values2d)
root_w = np.sqrt(w)
y = root_w * cast(Float64Array, self.dependent.values2d)
x = root_w * cast(Float64Array, self.exog.values2d)
if not self._has_effect:
ybar = root_w @ _lstsq(root_w, y, rcond=None)[0]
y_effect, x_effect = np.zeros_like(y), np.zeros_like(x)
return y, x, ybar, y_effect, x_effect
drop_first = self._constant
d_l = []
if self.entity_effects:
d_l.append(self.dependent.dummies("entity", drop_first=drop_first).values)
drop_first = True
if self.time_effects:
d_l.append(self.dependent.dummies("time", drop_first=drop_first).values)
drop_first = True
if self.other_effects:
assert self._other_effect_cats is not None
oe = self._other_effect_cats.dataframe
for c in oe:
dummies = get_dummies(oe[c], drop_first=drop_first).astype(np.float64)
d_l.append(dummies.values)
drop_first = True
d = np.column_stack(d_l)
wd = root_w * d
if self.has_constant:
wd -= root_w * (w.T @ d / w.sum())
z = np.ones_like(root_w)
d -= z * (z.T @ d / z.sum())
x_mean = _lstsq(wd, x, rcond=None)[0]
y_mean = _lstsq(wd, y, rcond=None)[0]
# Save fitted unweighted effects to use in eps calculation
x_effects = d @ x_mean
y_effects = d @ y_mean
# Purge fitted, weighted values
x = x - wd @ x_mean
y = y - wd @ y_mean
ybar = root_w @ _lstsq(root_w, y, rcond=None)[0]
return y, x, ybar, y_effects, x_effects
def _choose_twoway_algo(self) -> bool:
if not (self.entity_effects and self.time_effects):
return False
nentity, nobs = self.dependent.nentity, self.dependent.nobs
nreg = min(nentity, nobs)
if nreg < self.exog.shape[1]:
return False
# MiB
reg_size = 8 * nentity * nobs * nreg // 2**20
low_memory = reg_size > 2**10
if low_memory:
import warnings
warnings.warn(
"Using low-memory algorithm to estimate two-way model. Explicitly set "
"low_memory=True to silence this message. Set low_memory=False to use "
"the standard algorithm that creates dummy variables for the smaller "
"of the number of entities or number of time periods.",
MemoryWarning,
stacklevel=3,
)
return low_memory
def _fast_path(
self, low_memory: bool
) -> tuple[Float64Array, Float64Array, Float64Array]:
"""Dummy-variable free estimation without weights"""
_y = self.dependent.values2d
_x = self.exog.values2d
ybar = np.asarray(_y.mean(0))
if not self._has_effect:
return _y, _x, ybar
y_gm = ybar
x_gm = _x.mean(0)
y = self.dependent
x = self.exog
if self.other_effects:
assert self._other_effect_cats is not None
groups = self._other_effect_cats
if self.entity_effects or self.time_effects:
groups = groups.copy()
if self.entity_effects:
effect = self.dependent.entity_ids
else:
effect = self.dependent.time_ids
col = ensure_unique_column("additional.effect", groups.dataframe)
groups.dataframe[col] = effect
y = cast(PanelData, y.general_demean(groups))
x = cast(PanelData, x.general_demean(groups))
elif self.entity_effects and self.time_effects:
y = cast(PanelData, y.demean("both", low_memory=low_memory))
x = cast(PanelData, x.demean("both", low_memory=low_memory))
elif self.entity_effects:
y = cast(PanelData, y.demean("entity"))
x = cast(PanelData, x.demean("entity"))
else: # self.time_effects
y = cast(PanelData, y.demean("time"))
x = cast(PanelData, x.demean("time"))
y_arr = y.values2d
x_arr = x.values2d
if self.has_constant:
y_arr = y_arr + y_gm
x_arr = x_arr + x_gm
else:
ybar = np.asarray(0.0)
return y_arr, x_arr, ybar
def _weighted_fast_path(
self, low_memory: bool
) -> tuple[Float64Array, Float64Array, Float64Array, Float64Array, Float64Array]:
"""Dummy-variable free estimation with weights"""
y_arr = self.dependent.values2d
x_arr = self.exog.values2d
w = self.weights.values2d
root_w = cast(Float64Array, np.sqrt(w))
wybar = root_w * (w.T @ y_arr / w.sum())
if not self._has_effect:
wy_arr = root_w * self.dependent.values2d
wx_arr = root_w * self.exog.values2d
y_effect, x_effect = np.zeros_like(wy_arr), np.zeros_like(wx_arr)
return wy_arr, wx_arr, wybar, y_effect, x_effect
wy_gm = wybar
wx_gm = root_w * (w.T @ x_arr / w.sum())
y = self.dependent
x = self.exog
if self.other_effects:
assert self._other_effect_cats is not None
groups = self._other_effect_cats
if self.entity_effects or self.time_effects:
groups = groups.copy()
if self.entity_effects:
effect = self.dependent.entity_ids
else:
effect = self.dependent.time_ids
col = ensure_unique_column("additional.effect", groups.dataframe)
groups.dataframe[col] = effect
wy = y.general_demean(groups, weights=self.weights)
wx = x.general_demean(groups, weights=self.weights)
elif self.entity_effects and self.time_effects:
wy = cast(
PanelData, y.demean("both", weights=self.weights, low_memory=low_memory)
)
wx = cast(
PanelData, x.demean("both", weights=self.weights, low_memory=low_memory)
)
elif self.entity_effects:
wy = cast(PanelData, y.demean("entity", weights=self.weights))
wx = cast(PanelData, x.demean("entity", weights=self.weights))
else: # self.time_effects
wy = cast(PanelData, y.demean("time", weights=self.weights))
wx = cast(PanelData, x.demean("time", weights=self.weights))
wy_arr = wy.values2d
wx_arr = wx.values2d
if self.has_constant:
wy_arr += wy_gm
wx_arr += wx_gm
else:
wybar = 0
wy_effects = y.values2d - wy_arr / root_w
wx_effects = x.values2d - wx_arr / root_w
return wy_arr, wx_arr, wybar, wy_effects, wx_effects
def _info(self) -> tuple[Series, Series, DataFrame | None]:
"""Information about model effects and panel structure"""
entity_info, time_info, other_info = super()._info()
if self.other_effects:
other_info_values: list[Series] = []
assert self._other_effect_cats is not None
oe = self._other_effect_cats.dataframe
for c in oe:
name = "Observations per group (" + str(c) + ")"
other_info_values.append(
panel_structure_stats(oe[c].values.astype(np.int32), name)
)
other_info = DataFrame(other_info_values)
return entity_info, time_info, other_info
@staticmethod
def _is_effect_nested(effects: NumericArray, clusters: NumericArray) -> bool:
"""Determine whether an effect is nested by the covariance clusters"""
is_nested = np.zeros(effects.shape[1], dtype=bool)
for i, e in enumerate(effects.T):
e = (e - e.min()).astype(np.int64)
e_count = len(np.unique(e))
for c in clusters.T:
c = (c - c.min()).astype(np.int64)
cmax = c.max()
ec = e * (cmax + 1) + c
is_nested[i] = len(np.unique(ec)) == e_count
return bool(np.all(is_nested))
def _determine_df_adjustment(
self,
cov_type: str,
**cov_config: bool | float | str | IntArray | DataFrame | PanelData,
) -> bool:
if cov_type != "clustered" or not self._has_effect:
return True
num_effects = self.entity_effects + self.time_effects
if self.other_effects:
assert self._other_effect_cats is not None
num_effects += self._other_effect_cats.shape[1]
clusters = cov_config.get("clusters", None)
if clusters is None: # No clusters
return True
effects = self._collect_effects()
if num_effects == 1:
return not self._is_effect_nested(effects, cast(IntArray, clusters))
return True # Default case for 2-way -- not completely clear
def fit(
self,
*,
use_lsdv: bool = False,
use_lsmr: bool = False,
low_memory: bool | None = None,
cov_type: str = "unadjusted",
debiased: bool = True,
auto_df: bool = True,
count_effects: bool = True,
**cov_config: bool | float | str | IntArray | DataFrame | PanelData,
) -> PanelEffectsResults:
"""
Estimate model parameters
Parameters
----------
use_lsdv : bool
Flag indicating to use the Least Squares Dummy Variable estimator
to eliminate effects. The default value uses only means and does
            not require constructing dummy variables for each effect.
use_lsmr : bool
Flag indicating to use LSDV with the Sparse Equations and Least
Squares estimator to eliminate the fixed effects.
low_memory : {bool, None}
Flag indicating whether to use a low-memory algorithm when a model
contains two-way fixed effects. If `None`, the choice is taken
automatically, and the low memory algorithm is used if the
            required dummy variable array is both larger than the array of
            regressors in the model and requires more than 1 GiB.
cov_type : str
Name of covariance estimator. See Notes.
debiased : bool
            Flag indicating whether to debias the covariance estimator using
a degree of freedom adjustment.
auto_df : bool
Flag indicating that the treatment of estimated effects in degree
of freedom adjustment is automatically handled. This is useful
since clustered standard errors that are clustered using the same
variable as an effect do not require degree of freedom correction
while other estimators such as the unadjusted covariance do.
count_effects : bool
Flag indicating that the covariance estimator should be adjusted
to account for the estimation of effects in the model. Only used
if ``auto_df=False``.
**cov_config
Additional covariance-specific options. See Notes.
Returns
-------
PanelEffectsResults
Estimation results
Examples
--------
>>> from linearmodels import PanelOLS
>>> mod = PanelOLS(y, x, entity_effects=True)
>>> res = mod.fit(cov_type="clustered", cluster_entity=True)
Notes
-----
Three covariance estimators are supported:
* "unadjusted", "homoskedastic" - Assume residual are homoskedastic
* "robust", "heteroskedastic" - Control for heteroskedasticity using
White's estimator
* "clustered` - One- or two-way clustering. Configuration options are:
* ``clusters`` - Input containing 1 or 2 variables.
Clusters should be integer valued, although other types will
be coerced to integer values by treating as categorical variables
* ``cluster_entity`` - Boolean flag indicating to use entity
clusters
* ``cluster_time`` - Boolean indicating to use time clusters
* "kernel" - Driscoll-Kraay HAC estimator. Configurations options are:
* ``kernel`` - One of the supported kernels (bartlett, parzen, qs).
Default is Bartlett's kernel, which is produces a covariance
estimator similar to the Newey-West covariance estimator.
* ``bandwidth`` - Bandwidth to use when computing the kernel. If
not provided, a naive default is used.
"""
weighted = np.any(self.weights.values2d != 1.0)
if use_lsmr:
y, x, ybar, y_effects, x_effects = self._lsmr_path()
elif use_lsdv:
y, x, ybar, y_effects, x_effects = self._slow_path()
else:
low_memory = (
self._choose_twoway_algo() if low_memory is None else low_memory
)
if not weighted:
y, x, ybar = self._fast_path(low_memory=low_memory)
y_effects = np.array([0.0])
x_effects = np.zeros(x.shape)
else:
y, x, ybar, y_effects, x_effects = self._weighted_fast_path(
low_memory=low_memory
)
neffects = 0
drop_first = self.has_constant
if self.entity_effects:
neffects += self.dependent.nentity - drop_first
drop_first = True
if self.time_effects:
neffects += self.dependent.nobs - drop_first
drop_first = True
if self.other_effects:
assert self._other_effect_cats is not None
oe = self._other_effect_cats.dataframe
for c in oe:
neffects += oe[c].nunique() - drop_first
drop_first = True
if self.entity_effects or self.time_effects or self.other_effects:
if not self._drop_absorbed:
check_absorbed(x, [str(var) for var in self.exog.vars])
else:
# TODO: Need to special case the constant here when determining which
# to retain since we always want to retain the constant if present
retain = not_absorbed(x, self._constant, self._constant_index)
if not retain:
raise ValueError(
"All columns in exog have been fully absorbed by the included"
" effects. This model cannot be estimated."
)
if len(retain) != x.shape[1]:
drop = set(range(x.shape[1])).difference(retain)
dropped = ", ".join([str(self.exog.vars[i]) for i in drop])
import warnings
warnings.warn(
absorbing_warn_msg.format(absorbed_variables=dropped),
AbsorbingEffectWarning,
stacklevel=2,
)
x = x[:, retain]
# Update constant index loc
if self._constant:
assert isinstance(self._constant_index, int)
self._constant_index = int(
np.squeeze(
np.argwhere(np.array(retain) == self._constant_index)
)
)
# Adjust exog
self.exog = PanelData(self.exog.dataframe.iloc[:, retain])
x_effects = x_effects[:, retain]
params = _lstsq(x, y, rcond=None)[0]
nobs = self.dependent.dataframe.shape[0]
df_model = x.shape[1] + neffects
df_resid = nobs - df_model
# Check clusters if singletons were removed
cov_config = self._setup_clusters(cov_config)
if auto_df:
count_effects = self._determine_df_adjustment(cov_type, **cov_config)
extra_df = neffects if count_effects else 0
cov = setup_covariance_estimator(
self._cov_estimators,
cov_type,
y,
x,
params,
self.dependent.entity_ids,
self.dependent.time_ids,
debiased=debiased,
extra_df=extra_df,
**cov_config,
)
weps = y - x @ params
eps = weps
_y = self.dependent.values2d
_x = self.exog.values2d
if weighted:
eps = (_y - y_effects) - (_x - x_effects) @ params
if self.has_constant:
# Correction since y_effects and x_effects @ params add mean
w = self.weights.values2d
eps -= (w * eps).sum() / w.sum()
index = self.dependent.index
fitted = DataFrame(_x @ params, index, ["fitted_values"])
idiosyncratic = DataFrame(eps, index, ["idiosyncratic"])
eps_effects = _y - fitted.values
sigma2_tot = float(np.squeeze(eps_effects.T @ eps_effects) / nobs)
sigma2_eps = float(np.squeeze(eps.T @ eps) / nobs)
sigma2_effects = sigma2_tot - sigma2_eps
rho = sigma2_effects / sigma2_tot if sigma2_tot > 0.0 else 0.0
resid_ss = float(np.squeeze(weps.T @ weps))
if self.has_constant:
mu = ybar
else:
mu = np.array([0.0])
total_ss = float(np.squeeze((y - mu).T @ (y - mu)))
r2 = 1 - resid_ss / total_ss if total_ss > 0.0 else 0.0
root_w = cast(Float64Array, np.sqrt(self.weights.values2d))
y_ex = root_w * self.dependent.values2d
mu_ex = 0
if (
self.has_constant
or self.entity_effects
or self.time_effects
or self.other_effects
):
mu_ex = root_w * ((root_w.T @ y_ex) / (root_w.T @ root_w))
total_ss_ex_effect = float(np.squeeze((y_ex - mu_ex).T @ (y_ex - mu_ex)))
r2_ex_effects = (
1 - resid_ss / total_ss_ex_effect if total_ss_ex_effect > 0.0 else 0.0
)
res = self._postestimation(params, cov, debiased, df_resid, weps, y, x, root_w)
######################################
# Pooled f-stat
######################################
if self.entity_effects or self.time_effects or self.other_effects:
wy, wx = root_w * self.dependent.values2d, root_w * self.exog.values2d
df_num, df_denom = (df_model - wx.shape[1]), df_resid
if not self.has_constant:
                # Correction for when the model does not have an explicit constant
wy -= root_w * _lstsq(root_w, wy, rcond=None)[0]
wx -= root_w * _lstsq(root_w, wx, rcond=None)[0]
df_num -= 1
weps_pooled = wy - wx @ _lstsq(wx, wy, rcond=None)[0]
resid_ss_pooled = float(np.squeeze(weps_pooled.T @ weps_pooled))
num = (resid_ss_pooled - resid_ss) / df_num
denom = resid_ss / df_denom
stat = num / denom
f_pooled = WaldTestStatistic(
stat,
"Effects are zero",
df_num,
df_denom=df_denom,
name="Pooled F-statistic",
)
res.update(f_pooled=f_pooled)
effects = DataFrame(
eps_effects - eps,
columns=["estimated_effects"],
index=self.dependent.index,
)
else:
effects = DataFrame(
np.zeros_like(eps),
columns=["estimated_effects"],
index=self.dependent.index,
)
res.update(
dict(
df_resid=df_resid,
df_model=df_model,
nobs=y.shape[0],
residual_ss=resid_ss,
total_ss=total_ss,
wresids=weps,
resids=eps,
r2=r2,
entity_effects=self.entity_effects,
time_effects=self.time_effects,
other_effects=self.other_effects,
sigma2_eps=sigma2_eps,
sigma2_effects=sigma2_effects,
rho=rho,
r2_ex_effects=r2_ex_effects,
effects=effects,
fitted=fitted,
idiosyncratic=idiosyncratic,
)
)
return PanelEffectsResults(res)
|
(dependent: 'PanelDataLike', exog: 'PanelDataLike', *, weights: 'PanelDataLike | None' = None, entity_effects: 'bool' = False, time_effects: 'bool' = False, other_effects: 'PanelDataLike | None' = None, singletons: 'bool' = True, drop_absorbed: 'bool' = False, check_rank: 'bool' = True) -> 'None'
|
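A minimal usage sketch for the estimator above, assuming the synthetic dataset produced by generate_panel_data contains columns y and x1 as in the docstring example:

from linearmodels import PanelOLS
from linearmodels.panel import generate_panel_data

panel = generate_panel_data()
# Entity effects are absorbed rather than estimated as dummy coefficients
mod = PanelOLS(panel.data["y"], panel.data[["x1"]], entity_effects=True)
res = mod.fit(cov_type="clustered", cluster_entity=True)
print(res)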
43,011 |
linearmodels.panel.model
|
__init__
| null |
def __init__(
self,
dependent: PanelDataLike,
exog: PanelDataLike,
*,
weights: PanelDataLike | None = None,
entity_effects: bool = False,
time_effects: bool = False,
other_effects: PanelDataLike | None = None,
singletons: bool = True,
drop_absorbed: bool = False,
check_rank: bool = True,
) -> None:
super().__init__(dependent, exog, weights=weights, check_rank=check_rank)
self._entity_effects = entity_effects
self._time_effects = time_effects
self._other_effect_cats: PanelData | None = None
self._singletons = singletons
self._other_effects = self._validate_effects(other_effects)
self._has_effect = entity_effects or time_effects or self.other_effects
self._drop_absorbed = drop_absorbed
self._singleton_index = None
self._drop_singletons()
|
(self, dependent: Union[linearmodels.panel.data.PanelData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series], exog: Union[linearmodels.panel.data.PanelData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series], *, weights: Union[linearmodels.panel.data.PanelData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType] = None, entity_effects: bool = False, time_effects: bool = False, other_effects: Union[linearmodels.panel.data.PanelData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType] = None, singletons: bool = True, drop_absorbed: bool = False, check_rank: bool = True) -> NoneType
|
43,013 |
linearmodels.panel.model
|
__str__
| null |
def __str__(self) -> str:
out = super().__str__()
additional = (
"\nEntity Effects: {ee}, Time Effects: {te}, Num Other Effects: {oe}"
)
oe = 0
if self.other_effects:
assert self._other_effect_cats is not None
oe = self._other_effect_cats.nvar
additional = additional.format(
ee=self.entity_effects, te=self.time_effects, oe=oe
)
out += additional
return out
|
(self) -> str
|
43,016 |
linearmodels.panel.model
|
_choose_twoway_algo
| null |
def _choose_twoway_algo(self) -> bool:
if not (self.entity_effects and self.time_effects):
return False
nentity, nobs = self.dependent.nentity, self.dependent.nobs
nreg = min(nentity, nobs)
if nreg < self.exog.shape[1]:
return False
# MiB
reg_size = 8 * nentity * nobs * nreg // 2**20
low_memory = reg_size > 2**10
if low_memory:
import warnings
warnings.warn(
"Using low-memory algorithm to estimate two-way model. Explicitly set "
"low_memory=True to silence this message. Set low_memory=False to use "
"the standard algorithm that creates dummy variables for the smaller "
"of the number of entities or number of time periods.",
MemoryWarning,
stacklevel=3,
)
return low_memory
|
(self) -> bool
|
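The memory heuristic in _choose_twoway_algo reduces to a few lines of arithmetic; a standalone sketch of the same rule (twoway_low_memory is a hypothetical name):

def twoway_low_memory(nentity: int, nobs: int, nexog: int) -> bool:
    # Dummies for the smaller dimension form an (nentity * nobs) x
    # min(nentity, nobs) float64 array (8 bytes per value); switch to the
    # low-memory algorithm when that array exceeds 1 GiB (2**10 MiB).
    nreg = min(nentity, nobs)
    if nreg < nexog:
        return False
    size_mib = 8 * nentity * nobs * nreg // 2**20
    return size_mib > 2**10


# 2,000 entities observed over 600 periods -> roughly 5.4 GiB of dummies
print(twoway_low_memory(2000, 600, 5))  # True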
43,017 |
linearmodels.panel.model
|
_collect_effects
| null |
def _collect_effects(self) -> NumericArray:
if not self._has_effect:
return np.empty((self.dependent.shape[0], 0))
effects = []
if self.entity_effects:
effects.append(np.asarray(self.dependent.entity_ids).squeeze())
if self.time_effects:
effects.append(np.asarray(self.dependent.time_ids).squeeze())
if self.other_effects:
assert self._other_effect_cats is not None
other = self._other_effect_cats.dataframe
for col in other:
effects.append(np.asarray(other[col]).squeeze())
return np.column_stack(effects)
|
(self) -> numpy.ndarray
|
43,018 |
linearmodels.panel.model
|
_determine_df_adjustment
| null |
def _determine_df_adjustment(
self,
cov_type: str,
**cov_config: bool | float | str | IntArray | DataFrame | PanelData,
) -> bool:
if cov_type != "clustered" or not self._has_effect:
return True
num_effects = self.entity_effects + self.time_effects
if self.other_effects:
assert self._other_effect_cats is not None
num_effects += self._other_effect_cats.shape[1]
clusters = cov_config.get("clusters", None)
if clusters is None: # No clusters
return True
effects = self._collect_effects()
if num_effects == 1:
return not self._is_effect_nested(effects, cast(IntArray, clusters))
return True # Default case for 2-way -- not completely clear
|
(self, cov_type: str, **cov_config: bool | float | str | numpy.ndarray | pandas.core.frame.DataFrame | linearmodels.panel.data.PanelData) -> bool
|
43,019 |
linearmodels.panel.model
|
_drop_singletons
| null |
def _drop_singletons(self) -> None:
if self._singletons or not self._has_effect:
return
effects = self._collect_effects()
retain = in_2core_graph(effects)
if np.all(retain):
return
import warnings as warn
nobs = retain.shape[0]
ndropped = nobs - retain.sum()
warn.warn(
f"{ndropped} singleton observations dropped",
SingletonWarning,
stacklevel=3,
)
drop = ~retain
self._singleton_index = cast(BoolArray, drop)
self.dependent.drop(drop)
self.exog.drop(drop)
self.weights.drop(drop)
if self.other_effects:
assert self._other_effect_cats is not None
self._other_effect_cats.drop(drop)
# Reverify exog matrix
self._check_exog_rank()
|
(self) -> NoneType
|
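A sketch of what _drop_singletons removes in the simplest one-way case: an entity observed only once carries no within-entity information, since its observation is identically zero after entity demeaning. The in_2core_graph call above handles the general multi-effect case.

import pandas as pd

idx = pd.MultiIndex.from_tuples(
    [("a", 1), ("a", 2), ("b", 1)], names=["entity", "time"]
)
df = pd.DataFrame({"y": [1.0, 2.0, 5.0]}, index=idx)
counts = df.groupby(level="entity")["y"].transform("size")
print(df[counts > 1])  # entity "b" is a singleton and is excluded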
43,022 |
linearmodels.panel.model
|
_fast_path
|
Dummy-variable free estimation without weights
|
def _fast_path(
self, low_memory: bool
) -> tuple[Float64Array, Float64Array, Float64Array]:
"""Dummy-variable free estimation without weights"""
_y = self.dependent.values2d
_x = self.exog.values2d
ybar = np.asarray(_y.mean(0))
if not self._has_effect:
return _y, _x, ybar
y_gm = ybar
x_gm = _x.mean(0)
y = self.dependent
x = self.exog
if self.other_effects:
assert self._other_effect_cats is not None
groups = self._other_effect_cats
if self.entity_effects or self.time_effects:
groups = groups.copy()
if self.entity_effects:
effect = self.dependent.entity_ids
else:
effect = self.dependent.time_ids
col = ensure_unique_column("additional.effect", groups.dataframe)
groups.dataframe[col] = effect
y = cast(PanelData, y.general_demean(groups))
x = cast(PanelData, x.general_demean(groups))
elif self.entity_effects and self.time_effects:
y = cast(PanelData, y.demean("both", low_memory=low_memory))
x = cast(PanelData, x.demean("both", low_memory=low_memory))
elif self.entity_effects:
y = cast(PanelData, y.demean("entity"))
x = cast(PanelData, x.demean("entity"))
else: # self.time_effects
y = cast(PanelData, y.demean("time"))
x = cast(PanelData, x.demean("time"))
y_arr = y.values2d
x_arr = x.values2d
if self.has_constant:
y_arr = y_arr + y_gm
x_arr = x_arr + x_gm
else:
ybar = np.asarray(0.0)
return y_arr, x_arr, ybar
|
(self, low_memory: bool) -> tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray]
|
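A sketch of the within transformation _fast_path applies for entity effects, including the grand mean that is added back when the model contains a constant:

import pandas as pd

idx = pd.MultiIndex.from_product([["a", "b"], [1, 2, 3]], names=["entity", "time"])
y = pd.Series([1.0, 2.0, 3.0, 4.0, 6.0, 8.0], index=idx)
demeaned = y - y.groupby(level="entity").transform("mean")
print(demeaned + y.mean())  # entity-demeaned series plus the grand mean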
43,023 |
linearmodels.panel.model
|
_info
|
Information about model effects and panel structure
|
def _info(self) -> tuple[Series, Series, DataFrame | None]:
"""Information about model effects and panel structure"""
entity_info, time_info, other_info = super()._info()
if self.other_effects:
other_info_values: list[Series] = []
assert self._other_effect_cats is not None
oe = self._other_effect_cats.dataframe
for c in oe:
name = "Observations per group (" + str(c) + ")"
other_info_values.append(
panel_structure_stats(oe[c].values.astype(np.int32), name)
)
other_info = DataFrame(other_info_values)
return entity_info, time_info, other_info
|
(self) -> tuple[pandas.core.series.Series, pandas.core.series.Series, pandas.core.frame.DataFrame | None]
|
43,024 |
linearmodels.panel.model
|
_is_effect_nested
|
Determine whether an effect is nested by the covariance clusters
|
@staticmethod
def _is_effect_nested(effects: NumericArray, clusters: NumericArray) -> bool:
"""Determine whether an effect is nested by the covariance clusters"""
is_nested = np.zeros(effects.shape[1], dtype=bool)
for i, e in enumerate(effects.T):
e = (e - e.min()).astype(np.int64)
e_count = len(np.unique(e))
for c in clusters.T:
c = (c - c.min()).astype(np.int64)
cmax = c.max()
ec = e * (cmax + 1) + c
is_nested[i] = len(np.unique(ec)) == e_count
return bool(np.all(is_nested))
|
(effects: numpy.ndarray, clusters: numpy.ndarray) -> bool
|
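The pair-encoding used by _is_effect_nested can be demonstrated directly; a sketch with a standalone copy of the logic (is_nested is a hypothetical name):

import numpy as np


def is_nested(effect: np.ndarray, cluster: np.ndarray) -> bool:
    # An effect is nested within a cluster variable when every effect
    # level falls inside a single cluster, so integer-encoding the
    # (effect, cluster) pairs creates no new levels.
    e = (effect - effect.min()).astype(np.int64)
    c = (cluster - cluster.min()).astype(np.int64)
    ec = e * (c.max() + 1) + c
    return len(np.unique(ec)) == len(np.unique(e))


entity = np.array([0, 0, 1, 1, 2, 2])
region = np.array([0, 0, 0, 0, 1, 1])
print(is_nested(entity, region))  # True: no entity straddles two regions
print(is_nested(region, entity))  # False: region 0 spans entities 0 and 1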
43,025 |
linearmodels.panel.model
|
_lsmr_path
|
Sparse implementation, works for all scenarios
|
def _lsmr_path(
self,
) -> tuple[Float64Array, Float64Array, Float64Array, Float64Array, Float64Array]:
"""Sparse implementation, works for all scenarios"""
y = cast(Float64Array, self.dependent.values2d)
x = cast(Float64Array, self.exog.values2d)
w = cast(Float64Array, self.weights.values2d)
root_w = np.sqrt(w)
wybar = root_w * (w.T @ y / w.sum())
wy = root_w * y
wx = root_w * x
if not self._has_effect:
y_effect, x_effect = np.zeros_like(wy), np.zeros_like(wx)
return wy, wx, wybar, y_effect, x_effect
wy_gm = wybar
wx_gm = root_w * (w.T @ x / w.sum())
root_w_sparse = csc_matrix(root_w)
cats_l: list[IntArray | Float64Array] = []
if self.entity_effects:
cats_l.append(self.dependent.entity_ids)
if self.time_effects:
cats_l.append(self.dependent.time_ids)
if self.other_effects:
assert self._other_effect_cats is not None
cats_l.append(self._other_effect_cats.values2d)
cats = np.concatenate(cats_l, 1)
wd, cond = dummy_matrix(cats, precondition=True)
assert isinstance(wd, csc_matrix)
if self._is_weighted:
wd = wd.multiply(root_w_sparse)
wx_mean_l = []
for i in range(x.shape[1]):
cond_mean = lsmr(wd, wx[:, i], atol=1e-8, btol=1e-8)[0]
cond_mean /= cond
wx_mean_l.append(cond_mean)
wx_mean = np.column_stack(wx_mean_l)
wy_mean = lsmr(wd, wy, atol=1e-8, btol=1e-8)[0]
wy_mean /= cond
wy_mean = wy_mean[:, None]
wx_mean = csc_matrix(wx_mean)
wy_mean = csc_matrix(wy_mean)
# Purge fitted, weighted values
sp_cond = diags(cond, format="csc")
wx = wx - (wd @ sp_cond @ wx_mean).A
wy = wy - (wd @ sp_cond @ wy_mean).A
if self.has_constant:
wy += wy_gm
wx += wx_gm
else:
wybar = 0
y_effects = y - wy / root_w
x_effects = x - wx / root_w
return wy, wx, wybar, y_effects, x_effects
|
(self) -> tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, numpy.ndarray, numpy.ndarray]
|
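A sketch of the core idea in _lsmr_path: with a sparse dummy matrix, LSMR recovers group means without forming dense dummies, and the residual is the within-transformed series. This assumes a single unweighted effect with integer codes; the method above additionally handles weights and the preconditioning returned by dummy_matrix.

import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import lsmr

codes = np.array([0, 0, 1, 1, 1])
y = np.array([1.0, 3.0, 2.0, 4.0, 6.0])
rows = np.arange(codes.shape[0])
d = csc_matrix((np.ones_like(y), (rows, codes)))  # 5 x 2 dummy matrix
group_means = lsmr(d, y, atol=1e-8, btol=1e-8)[0]
print(group_means)          # [2. 4.]
print(y - d @ group_means)  # within-transformed: [-1.  1. -2.  0.  2.]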
43,031 |
linearmodels.panel.model
|
_slow_path
|
Frisch-Waugh-Lovell implementation, works for all scenarios
|
def _slow_path(
self,
) -> tuple[Float64Array, Float64Array, Float64Array, Float64Array, Float64Array]:
"""Frisch-Waugh-Lovell implementation, works for all scenarios"""
w = cast(Float64Array, self.weights.values2d)
root_w = np.sqrt(w)
y = root_w * cast(Float64Array, self.dependent.values2d)
x = root_w * cast(Float64Array, self.exog.values2d)
if not self._has_effect:
ybar = root_w @ _lstsq(root_w, y, rcond=None)[0]
y_effect, x_effect = np.zeros_like(y), np.zeros_like(x)
return y, x, ybar, y_effect, x_effect
drop_first = self._constant
d_l = []
if self.entity_effects:
d_l.append(self.dependent.dummies("entity", drop_first=drop_first).values)
drop_first = True
if self.time_effects:
d_l.append(self.dependent.dummies("time", drop_first=drop_first).values)
drop_first = True
if self.other_effects:
assert self._other_effect_cats is not None
oe = self._other_effect_cats.dataframe
for c in oe:
dummies = get_dummies(oe[c], drop_first=drop_first).astype(np.float64)
d_l.append(dummies.values)
drop_first = True
d = np.column_stack(d_l)
wd = root_w * d
if self.has_constant:
wd -= root_w * (w.T @ d / w.sum())
z = np.ones_like(root_w)
d -= z * (z.T @ d / z.sum())
x_mean = _lstsq(wd, x, rcond=None)[0]
y_mean = _lstsq(wd, y, rcond=None)[0]
# Save fitted unweighted effects to use in eps calculation
x_effects = d @ x_mean
y_effects = d @ y_mean
# Purge fitted, weighted values
x = x - wd @ x_mean
y = y - wd @ y_mean
ybar = root_w @ _lstsq(root_w, y, rcond=None)[0]
return y, x, ybar, y_effects, x_effects
|
(self) -> tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, numpy.ndarray, numpy.ndarray]
|
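A sketch verifying the Frisch-Waugh-Lovell logic behind _slow_path: partialling the effect dummies out of both y and x and then regressing the residuals reproduces the slope from the full dummy-variable regression.

import numpy as np

rng = np.random.default_rng(0)
d = np.kron(np.eye(4), np.ones((30, 1)))  # dummies for 4 groups of 30
x = rng.standard_normal((120, 1))
y = (d @ np.array([[1.0], [2.0], [3.0], [4.0]])
     + 0.5 * x + 0.1 * rng.standard_normal((120, 1)))

x_tilde = x - d @ np.linalg.lstsq(d, x, rcond=None)[0]
y_tilde = y - d @ np.linalg.lstsq(d, y, rcond=None)[0]
beta_fwl = np.linalg.lstsq(x_tilde, y_tilde, rcond=None)[0]
beta_full = np.linalg.lstsq(np.hstack([x, d]), y, rcond=None)[0][:1]
print(np.allclose(beta_fwl, beta_full))  # True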
43,033 |
linearmodels.panel.model
|
_validate_effects
|
Check model effects
|
def _validate_effects(self, effects: PanelDataLike | None) -> bool:
"""Check model effects"""
if effects is None:
return False
effects = PanelData(effects, var_name="OtherEffect", convert_dummies=False)
if effects.shape[1:] != self._original_shape[1:]:
raise ValueError(
"other_effects must have the same number of "
"entities and time periods as dependent."
)
num_effects = effects.nvar
if num_effects + self.entity_effects + self.time_effects > 2:
raise ValueError("At most two effects supported.")
cats = {}
effects_frame = effects.dataframe
for col in effects_frame:
cat = Categorical(effects_frame[col])
            # TODO: Bug in pandas-stubs
# https://github.com/pandas-dev/pandas-stubs/issues/111
cats[col] = cat.codes.astype(np.int64) # type: ignore
cats_df = DataFrame(cats, index=effects_frame.index)
cats_df = cats_df[effects_frame.columns]
other_effects = PanelData(cats_df)
other_effects.drop(~self.not_null)
self._other_effect_cats = other_effects
cats_array = other_effects.values2d
nested = False
nesting_effect = ""
if cats_array.shape[1] == 2:
nested = self._is_effect_nested(cats_array[:, [0]], cats_array[:, [1]])
nested |= self._is_effect_nested(cats_array[:, [1]], cats_array[:, [0]])
nesting_effect = "other effects"
elif self.entity_effects:
nested = self._is_effect_nested(
cats_array[:, [0]], self.dependent.entity_ids
)
nested |= self._is_effect_nested(
self.dependent.entity_ids, cats_array[:, [0]]
)
nesting_effect = "entity effects"
elif self.time_effects:
nested = self._is_effect_nested(cats_array[:, [0]], self.dependent.time_ids)
nested |= self._is_effect_nested(
self.dependent.time_ids, cats_array[:, [0]]
)
nesting_effect = "time effects"
if nested:
raise ValueError(
"Included other effects nest or are nested "
"by {effect}".format(effect=nesting_effect)
)
return True
|
(self, effects: Union[linearmodels.panel.data.PanelData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType]) -> bool
|
43,034 |
linearmodels.panel.model
|
_weighted_fast_path
|
Dummy-variable free estimation with weights
|
def _weighted_fast_path(
self, low_memory: bool
) -> tuple[Float64Array, Float64Array, Float64Array, Float64Array, Float64Array]:
"""Dummy-variable free estimation with weights"""
y_arr = self.dependent.values2d
x_arr = self.exog.values2d
w = self.weights.values2d
root_w = cast(Float64Array, np.sqrt(w))
wybar = root_w * (w.T @ y_arr / w.sum())
if not self._has_effect:
wy_arr = root_w * self.dependent.values2d
wx_arr = root_w * self.exog.values2d
y_effect, x_effect = np.zeros_like(wy_arr), np.zeros_like(wx_arr)
return wy_arr, wx_arr, wybar, y_effect, x_effect
wy_gm = wybar
wx_gm = root_w * (w.T @ x_arr / w.sum())
y = self.dependent
x = self.exog
if self.other_effects:
assert self._other_effect_cats is not None
groups = self._other_effect_cats
if self.entity_effects or self.time_effects:
groups = groups.copy()
if self.entity_effects:
effect = self.dependent.entity_ids
else:
effect = self.dependent.time_ids
col = ensure_unique_column("additional.effect", groups.dataframe)
groups.dataframe[col] = effect
wy = y.general_demean(groups, weights=self.weights)
wx = x.general_demean(groups, weights=self.weights)
elif self.entity_effects and self.time_effects:
wy = cast(
PanelData, y.demean("both", weights=self.weights, low_memory=low_memory)
)
wx = cast(
PanelData, x.demean("both", weights=self.weights, low_memory=low_memory)
)
elif self.entity_effects:
wy = cast(PanelData, y.demean("entity", weights=self.weights))
wx = cast(PanelData, x.demean("entity", weights=self.weights))
else: # self.time_effects
wy = cast(PanelData, y.demean("time", weights=self.weights))
wx = cast(PanelData, x.demean("time", weights=self.weights))
wy_arr = wy.values2d
wx_arr = wx.values2d
if self.has_constant:
wy_arr += wy_gm
wx_arr += wx_gm
else:
wybar = 0
wy_effects = y.values2d - wy_arr / root_w
wx_effects = x.values2d - wx_arr / root_w
return wy_arr, wx_arr, wybar, wy_effects, wx_effects
|
(self, low_memory: bool) -> tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, numpy.ndarray, numpy.ndarray]
|
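A sketch of the weighted quantities used in _weighted_fast_path: with root_w = sqrt(w), the sqrt-weighted data are root_w * y, and the weighted grand mean added back when a constant is present is root_w * (w'y / sum(w)).

import numpy as np

w = np.array([[1.0], [2.0], [1.0], [4.0]])
y = np.array([[1.0], [2.0], [3.0], [4.0]])
root_w = np.sqrt(w)
wy = root_w * y                       # sqrt-weighted data
wybar = root_w * (w.T @ y / w.sum())  # weighted mean (1 + 4 + 3 + 16) / 8 = 3
print(wybar.ravel())                  # each row is sqrt(w_i) * 3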
43,035 |
linearmodels.panel.model
|
fit
|
Estimate model parameters
Parameters
----------
use_lsdv : bool
Flag indicating to use the Least Squares Dummy Variable estimator
to eliminate effects. The default value uses only means and does
    not require constructing dummy variables for each effect.
use_lsmr : bool
Flag indicating to use LSDV with the Sparse Equations and Least
Squares estimator to eliminate the fixed effects.
low_memory : {bool, None}
Flag indicating whether to use a low-memory algorithm when a model
contains two-way fixed effects. If `None`, the choice is taken
automatically, and the low memory algorithm is used if the
    required dummy variable array is both larger than the array of
    regressors in the model and requires more than 1 GiB.
cov_type : str
Name of covariance estimator. See Notes.
debiased : bool
    Flag indicating whether to debias the covariance estimator using
a degree of freedom adjustment.
auto_df : bool
Flag indicating that the treatment of estimated effects in degree
of freedom adjustment is automatically handled. This is useful
since clustered standard errors that are clustered using the same
variable as an effect do not require degree of freedom correction
while other estimators such as the unadjusted covariance do.
count_effects : bool
Flag indicating that the covariance estimator should be adjusted
to account for the estimation of effects in the model. Only used
if ``auto_df=False``.
**cov_config
Additional covariance-specific options. See Notes.
Returns
-------
PanelEffectsResults
Estimation results
Examples
--------
>>> from linearmodels import PanelOLS
>>> mod = PanelOLS(y, x, entity_effects=True)
>>> res = mod.fit(cov_type="clustered", cluster_entity=True)
Notes
-----
Three covariance estimators are supported:
* "unadjusted", "homoskedastic" - Assume residual are homoskedastic
* "robust", "heteroskedastic" - Control for heteroskedasticity using
White's estimator
* "clustered` - One- or two-way clustering. Configuration options are:
* ``clusters`` - Input containing 1 or 2 variables.
Clusters should be integer valued, although other types will
be coerced to integer values by treating as categorical variables
* ``cluster_entity`` - Boolean flag indicating to use entity
clusters
* ``cluster_time`` - Boolean indicating to use time clusters
* "kernel" - Driscoll-Kraay HAC estimator. Configurations options are:
* ``kernel`` - One of the supported kernels (bartlett, parzen, qs).
Default is Bartlett's kernel, which is produces a covariance
estimator similar to the Newey-West covariance estimator.
* ``bandwidth`` - Bandwidth to use when computing the kernel. If
not provided, a naive default is used.
|
def fit(
self,
*,
use_lsdv: bool = False,
use_lsmr: bool = False,
low_memory: bool | None = None,
cov_type: str = "unadjusted",
debiased: bool = True,
auto_df: bool = True,
count_effects: bool = True,
**cov_config: bool | float | str | IntArray | DataFrame | PanelData,
) -> PanelEffectsResults:
"""
Estimate model parameters
Parameters
----------
use_lsdv : bool
Flag indicating to use the Least Squares Dummy Variable estimator
to eliminate effects. The default value uses only means and does
            not require constructing dummy variables for each effect.
use_lsmr : bool
Flag indicating to use LSDV with the Sparse Equations and Least
Squares estimator to eliminate the fixed effects.
low_memory : {bool, None}
Flag indicating whether to use a low-memory algorithm when a model
contains two-way fixed effects. If `None`, the choice is taken
automatically, and the low memory algorithm is used if the
            required dummy variable array is both larger than the array of
            regressors in the model and requires more than 1 GiB.
cov_type : str
Name of covariance estimator. See Notes.
debiased : bool
            Flag indicating whether to debias the covariance estimator using
a degree of freedom adjustment.
auto_df : bool
Flag indicating that the treatment of estimated effects in degree
of freedom adjustment is automatically handled. This is useful
since clustered standard errors that are clustered using the same
variable as an effect do not require degree of freedom correction
while other estimators such as the unadjusted covariance do.
count_effects : bool
Flag indicating that the covariance estimator should be adjusted
to account for the estimation of effects in the model. Only used
if ``auto_df=False``.
**cov_config
Additional covariance-specific options. See Notes.
Returns
-------
PanelEffectsResults
Estimation results
Examples
--------
>>> from linearmodels import PanelOLS
>>> mod = PanelOLS(y, x, entity_effects=True)
>>> res = mod.fit(cov_type="clustered", cluster_entity=True)
Notes
-----
Three covariance estimators are supported:
* "unadjusted", "homoskedastic" - Assume residual are homoskedastic
* "robust", "heteroskedastic" - Control for heteroskedasticity using
White's estimator
* "clustered` - One- or two-way clustering. Configuration options are:
* ``clusters`` - Input containing 1 or 2 variables.
Clusters should be integer valued, although other types will
be coerced to integer values by treating as categorical variables
* ``cluster_entity`` - Boolean flag indicating to use entity
clusters
* ``cluster_time`` - Boolean indicating to use time clusters
* "kernel" - Driscoll-Kraay HAC estimator. Configurations options are:
* ``kernel`` - One of the supported kernels (bartlett, parzen, qs).
Default is Bartlett's kernel, which is produces a covariance
estimator similar to the Newey-West covariance estimator.
* ``bandwidth`` - Bandwidth to use when computing the kernel. If
not provided, a naive default is used.
"""
weighted = np.any(self.weights.values2d != 1.0)
if use_lsmr:
y, x, ybar, y_effects, x_effects = self._lsmr_path()
elif use_lsdv:
y, x, ybar, y_effects, x_effects = self._slow_path()
else:
low_memory = (
self._choose_twoway_algo() if low_memory is None else low_memory
)
if not weighted:
y, x, ybar = self._fast_path(low_memory=low_memory)
y_effects = np.array([0.0])
x_effects = np.zeros(x.shape)
else:
y, x, ybar, y_effects, x_effects = self._weighted_fast_path(
low_memory=low_memory
)
neffects = 0
drop_first = self.has_constant
if self.entity_effects:
neffects += self.dependent.nentity - drop_first
drop_first = True
if self.time_effects:
neffects += self.dependent.nobs - drop_first
drop_first = True
if self.other_effects:
assert self._other_effect_cats is not None
oe = self._other_effect_cats.dataframe
for c in oe:
neffects += oe[c].nunique() - drop_first
drop_first = True
if self.entity_effects or self.time_effects or self.other_effects:
if not self._drop_absorbed:
check_absorbed(x, [str(var) for var in self.exog.vars])
else:
# TODO: Need to special case the constant here when determining which
# to retain since we always want to retain the constant if present
retain = not_absorbed(x, self._constant, self._constant_index)
if not retain:
raise ValueError(
"All columns in exog have been fully absorbed by the included"
" effects. This model cannot be estimated."
)
if len(retain) != x.shape[1]:
drop = set(range(x.shape[1])).difference(retain)
dropped = ", ".join([str(self.exog.vars[i]) for i in drop])
import warnings
warnings.warn(
absorbing_warn_msg.format(absorbed_variables=dropped),
AbsorbingEffectWarning,
stacklevel=2,
)
x = x[:, retain]
# Update constant index loc
if self._constant:
assert isinstance(self._constant_index, int)
self._constant_index = int(
np.squeeze(
np.argwhere(np.array(retain) == self._constant_index)
)
)
# Adjust exog
self.exog = PanelData(self.exog.dataframe.iloc[:, retain])
x_effects = x_effects[:, retain]
params = _lstsq(x, y, rcond=None)[0]
nobs = self.dependent.dataframe.shape[0]
df_model = x.shape[1] + neffects
df_resid = nobs - df_model
# Check clusters if singletons were removed
cov_config = self._setup_clusters(cov_config)
if auto_df:
count_effects = self._determine_df_adjustment(cov_type, **cov_config)
extra_df = neffects if count_effects else 0
cov = setup_covariance_estimator(
self._cov_estimators,
cov_type,
y,
x,
params,
self.dependent.entity_ids,
self.dependent.time_ids,
debiased=debiased,
extra_df=extra_df,
**cov_config,
)
weps = y - x @ params
eps = weps
_y = self.dependent.values2d
_x = self.exog.values2d
if weighted:
eps = (_y - y_effects) - (_x - x_effects) @ params
if self.has_constant:
# Correction since y_effects and x_effects @ params add mean
w = self.weights.values2d
eps -= (w * eps).sum() / w.sum()
index = self.dependent.index
fitted = DataFrame(_x @ params, index, ["fitted_values"])
idiosyncratic = DataFrame(eps, index, ["idiosyncratic"])
eps_effects = _y - fitted.values
sigma2_tot = float(np.squeeze(eps_effects.T @ eps_effects) / nobs)
sigma2_eps = float(np.squeeze(eps.T @ eps) / nobs)
sigma2_effects = sigma2_tot - sigma2_eps
rho = sigma2_effects / sigma2_tot if sigma2_tot > 0.0 else 0.0
resid_ss = float(np.squeeze(weps.T @ weps))
if self.has_constant:
mu = ybar
else:
mu = np.array([0.0])
total_ss = float(np.squeeze((y - mu).T @ (y - mu)))
r2 = 1 - resid_ss / total_ss if total_ss > 0.0 else 0.0
root_w = cast(Float64Array, np.sqrt(self.weights.values2d))
y_ex = root_w * self.dependent.values2d
mu_ex = 0
if (
self.has_constant
or self.entity_effects
or self.time_effects
or self.other_effects
):
mu_ex = root_w * ((root_w.T @ y_ex) / (root_w.T @ root_w))
total_ss_ex_effect = float(np.squeeze((y_ex - mu_ex).T @ (y_ex - mu_ex)))
r2_ex_effects = (
1 - resid_ss / total_ss_ex_effect if total_ss_ex_effect > 0.0 else 0.0
)
res = self._postestimation(params, cov, debiased, df_resid, weps, y, x, root_w)
######################################
# Pooled f-stat
######################################
if self.entity_effects or self.time_effects or self.other_effects:
wy, wx = root_w * self.dependent.values2d, root_w * self.exog.values2d
df_num, df_denom = (df_model - wx.shape[1]), df_resid
if not self.has_constant:
                # Correction for when the model does not have an explicit constant
wy -= root_w * _lstsq(root_w, wy, rcond=None)[0]
wx -= root_w * _lstsq(root_w, wx, rcond=None)[0]
df_num -= 1
weps_pooled = wy - wx @ _lstsq(wx, wy, rcond=None)[0]
resid_ss_pooled = float(np.squeeze(weps_pooled.T @ weps_pooled))
num = (resid_ss_pooled - resid_ss) / df_num
denom = resid_ss / df_denom
stat = num / denom
f_pooled = WaldTestStatistic(
stat,
"Effects are zero",
df_num,
df_denom=df_denom,
name="Pooled F-statistic",
)
res.update(f_pooled=f_pooled)
effects = DataFrame(
eps_effects - eps,
columns=["estimated_effects"],
index=self.dependent.index,
)
else:
effects = DataFrame(
np.zeros_like(eps),
columns=["estimated_effects"],
index=self.dependent.index,
)
res.update(
dict(
df_resid=df_resid,
df_model=df_model,
nobs=y.shape[0],
residual_ss=resid_ss,
total_ss=total_ss,
wresids=weps,
resids=eps,
r2=r2,
entity_effects=self.entity_effects,
time_effects=self.time_effects,
other_effects=self.other_effects,
sigma2_eps=sigma2_eps,
sigma2_effects=sigma2_effects,
rho=rho,
r2_ex_effects=r2_ex_effects,
effects=effects,
fitted=fitted,
idiosyncratic=idiosyncratic,
)
)
return PanelEffectsResults(res)
|
(self, *, use_lsdv: bool = False, use_lsmr: bool = False, low_memory: Optional[bool] = None, cov_type: str = 'unadjusted', debiased: bool = True, auto_df: bool = True, count_effects: bool = True, **cov_config: bool | float | str | numpy.ndarray | pandas.core.frame.DataFrame | linearmodels.panel.data.PanelData) -> linearmodels.panel.results.PanelEffectsResults
|
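A usage sketch for fit. The fitted values, estimated effects, and idiosyncratic residuals constructed above are attached to the result; the property names below are an assumption based on the keys passed to res.update and should be checked against the PanelEffectsResults API:

from linearmodels import PanelOLS
from linearmodels.panel import generate_panel_data

panel = generate_panel_data()
mod = PanelOLS.from_formula("y ~ 1 + x1 + EntityEffects", panel.data)
res = mod.fit(cov_type="clustered", cluster_entity=True)
print(res)  # summary; reports the pooled F-test when effects are present
# Assumed property names mirroring the frames built in fit
print(res.estimated_effects.head())
print(res.idiosyncratic.head())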
43,038 |
linearmodels.panel.model
|
PooledOLS
|
Pooled coefficient estimator for panel data
Parameters
----------
dependent : array_like
Dependent (left-hand-side) variable (time by entity)
exog : array_like
Exogenous or right-hand-side variables (variable by time by entity).
weights : array_like
    Weights to use in estimation. Assumes residual variance is
    proportional to the inverse of the weight so that the residual
    times the weight is homoskedastic.
check_rank : bool
Flag indicating whether to perform a rank check on the exogenous
variables to ensure that the model is identified. Skipping this
check can reduce the time required to validate a model specification.
Results may be numerically unstable if this check is skipped and
the matrix is not full rank.
Notes
-----
The model is given by
.. math::
y_{it}=\beta^{\prime}x_{it}+\epsilon_{it}
|
class PooledOLS(_PanelModelBase):
r"""
Pooled coefficient estimator for panel data
Parameters
----------
dependent : array_like
Dependent (left-hand-side) variable (time by entity)
exog : array_like
Exogenous or right-hand-side variables (variable by time by entity).
weights : array_like
        Weights to use in estimation. Assumes residual variance is
        proportional to the inverse of the weight so that the residual
        times the weight is homoskedastic.
check_rank : bool
Flag indicating whether to perform a rank check on the exogenous
variables to ensure that the model is identified. Skipping this
check can reduce the time required to validate a model specification.
Results may be numerically unstable if this check is skipped and
the matrix is not full rank.
Notes
-----
The model is given by
.. math::
y_{it}=\beta^{\prime}x_{it}+\epsilon_{it}
"""
def __init__(
self,
dependent: PanelDataLike,
exog: PanelDataLike,
*,
weights: PanelDataLike | None = None,
check_rank: bool = True,
) -> None:
super().__init__(dependent, exog, weights=weights, check_rank=check_rank)
@classmethod
def from_formula(
cls,
formula: str,
data: PanelDataLike,
*,
weights: PanelDataLike | None = None,
check_rank: bool = True,
) -> PooledOLS:
"""
Create a model from a formula
Parameters
----------
formula : str
Formula to transform into model. Conforms to formulaic formula
rules.
data : array_like
Data structure that can be coerced into a PanelData. In most
cases, this should be a multi-index DataFrame where the level 0
index contains the entities and the level 1 contains the time.
weights: array_like
            Weights to use in estimation. Assumes residual variance is
            proportional to the inverse of the weight so that the residual
            times the weight is homoskedastic.
check_rank : bool
Flag indicating whether to perform a rank check on the exogenous
variables to ensure that the model is identified. Skipping this
check can reduce the time required to validate a model
specification. Results may be numerically unstable if this check
is skipped and the matrix is not full rank.
Returns
-------
PooledOLS
Model specified using the formula
Notes
-----
Unlike standard formula syntax, it is necessary to explicitly include
a constant using the constant indicator (1)
Examples
--------
>>> from linearmodels import PooledOLS
>>> from linearmodels.panel import generate_panel_data
>>> panel_data = generate_panel_data()
>>> mod = PooledOLS.from_formula("y ~ 1 + x1", panel_data.data)
>>> res = mod.fit()
"""
parser = PanelFormulaParser(formula, data, context=capture_context(1))
dependent, exog = parser.data
mod = cls(dependent, exog, weights=weights, check_rank=check_rank)
mod.formula = formula
return mod
def fit(
self,
*,
cov_type: str = "unadjusted",
debiased: bool = True,
**cov_config: bool | float | str | IntArray | DataFrame | PanelData,
) -> PanelResults:
"""
Estimate model parameters
Parameters
----------
cov_type : str
Name of covariance estimator. See Notes.
debiased : bool
            Flag indicating whether to debias the covariance estimator using
a degree of freedom adjustment.
**cov_config
Additional covariance-specific options. See Notes.
Returns
-------
PanelResults
Estimation results
Examples
--------
>>> from linearmodels import PooledOLS
>>> mod = PooledOLS(y, x)
>>> res = mod.fit(cov_type="clustered", cluster_entity=True)
Notes
-----
Four covariance estimators are supported:
* "unadjusted", "homoskedastic" - Assume residual are homoskedastic
* "robust", "heteroskedastic" - Control for heteroskedasticity using
White's estimator
* "clustered` - One- or two-way clustering. Configuration options are:
* ``clusters`` - Input containing 1 or 2 variables.
Clusters should be integer values, although other types will
be coerced to integer values by treating as categorical variables
* ``cluster_entity`` - Boolean flag indicating to use entity
clusters
* ``cluster_time`` - Boolean indicating to use time clusters
* "kernel" - Driscoll-Kraay HAC estimator. Configurations options are:
* ``kernel`` - One of the supported kernels (bartlett, parzen, qs).
Default is Bartlett's kernel, which is produces a covariance
estimator similar to the Newey-West covariance estimator.
* ``bandwidth`` - Bandwidth to use when computing the kernel. If
not provided, a naive default is used.
"""
y = self.dependent.values2d
x = self.exog.values2d
w = self.weights.values2d
root_w = cast(Float64Array, np.sqrt(w))
wx = root_w * x
wy = root_w * y
params = _lstsq(wx, wy, rcond=None)[0]
nobs = y.shape[0]
df_model = x.shape[1]
df_resid = nobs - df_model
cov_config = self._setup_clusters(cov_config)
extra_df = 0
if "extra_df" in cov_config:
cov_config = cov_config.copy()
_extra_df = cov_config.pop("extra_df")
assert isinstance(_extra_df, (str, int))
extra_df = int(_extra_df)
cov = setup_covariance_estimator(
self._cov_estimators,
cov_type,
wy,
wx,
params,
self.dependent.entity_ids,
self.dependent.time_ids,
debiased=debiased,
extra_df=extra_df,
**cov_config,
)
weps = wy - wx @ params
index = self.dependent.index
fitted = DataFrame(x @ params, index, ["fitted_values"])
effects = DataFrame(
np.full_like(np.asarray(fitted), np.nan), index, ["estimated_effects"]
)
eps = y - fitted.values
idiosyncratic = DataFrame(eps, index, ["idiosyncratic"])
residual_ss = float(np.squeeze(weps.T @ weps))
e = y
if self._constant:
e = e - (w * y).sum() / w.sum()
total_ss = float(np.squeeze(w.T @ (e**2)))
r2 = 1 - residual_ss / total_ss
res = self._postestimation(
params, cov, debiased, df_resid, weps, wy, wx, root_w
)
res.update(
dict(
df_resid=df_resid,
df_model=df_model,
nobs=y.shape[0],
residual_ss=residual_ss,
total_ss=total_ss,
r2=r2,
wresids=weps,
resids=eps,
index=self.dependent.index,
fitted=fitted,
effects=effects,
idiosyncratic=idiosyncratic,
)
)
return PanelResults(res)
def predict(
self,
params: ArrayLike,
*,
exog: PanelDataLike | None = None,
data: PanelDataLike | None = None,
eval_env: int = 1,
context: Mapping[str, Any] | None = None,
) -> DataFrame:
"""
Predict values for additional data
Parameters
----------
params : array_like
Model parameters (nvar by 1)
exog : array_like
Exogenous regressors (nobs by nvar)
data : DataFrame
Values to use when making predictions from a model constructed
from a formula
        eval_env : int
            Depth to use when evaluating formulas.
        context : Mapping[str, Any]
            Mapping of variable names to values to use when evaluating
            formulas. If None, a context is captured from the calling
            frame at depth ``eval_env``.
Returns
-------
DataFrame
Fitted values from supplied data and parameters
Notes
-----
If `data` is not None, then `exog` must be None.
Predictions from models constructed using formulas can
        be computed using either `exog`, which will treat these as
arrays of values corresponding to the formula-processed data, or using
`data` which will be processed using the formula used to construct the
values corresponding to the original model specification.
"""
if data is not None and self.formula is None:
raise ValueError(
"Unable to use data when the model was not " "created using a formula."
)
if data is not None and exog is not None:
raise ValueError(
"Predictions can only be constructed using one "
"of exog or data, but not both."
)
if exog is not None:
exog = PanelData(exog).dataframe
else:
assert self._formula is not None
assert data is not None
if context is None:
context = capture_context(eval_env)
parser = PanelFormulaParser(self._formula, data, context=context)
exog = parser.exog
x = exog.values
params = np.atleast_2d(np.asarray(params))
if params.shape[0] == 1:
params = params.T
if x.shape[1] != params.shape[0]:
raise ValueError(
EXOG_PREDICT_MSG.format(
x_shape=x.shape[1], params_shape=params.shape[0]
)
)
pred = DataFrame(x @ params, index=exog.index, columns=["predictions"])
return pred
|
(dependent: 'PanelDataLike', exog: 'PanelDataLike', *, weights: 'PanelDataLike | None' = None, check_rank: 'bool' = True) -> 'None'
|
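A sketch of the estimator PooledOLS.fit implements: scaling both sides by sqrt(w) turns weighted least squares into ordinary least squares on the transformed data.

import numpy as np

rng = np.random.default_rng(0)
x = np.hstack([np.ones((200, 1)), rng.standard_normal((200, 1))])
y = x @ np.array([[1.0], [0.5]]) + rng.standard_normal((200, 1))
w = rng.uniform(0.5, 2.0, (200, 1))
root_w = np.sqrt(w)
params = np.linalg.lstsq(root_w * x, root_w * y, rcond=None)[0]
print(params.ravel())  # close to [1.0, 0.5]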
43,039 |
linearmodels.panel.model
|
__init__
| null |
def __init__(
self,
dependent: PanelDataLike,
exog: PanelDataLike,
*,
weights: PanelDataLike | None = None,
check_rank: bool = True,
) -> None:
super().__init__(dependent, exog, weights=weights, check_rank=check_rank)
|
(self, dependent: Union[linearmodels.panel.data.PanelData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series], exog: Union[linearmodels.panel.data.PanelData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series], *, weights: Union[linearmodels.panel.data.PanelData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType] = None, check_rank: bool = True) -> NoneType
|
43,053 |
linearmodels.panel.model
|
fit
|
Estimate model parameters
Parameters
----------
cov_type : str
Name of covariance estimator. See Notes.
debiased : bool
    Flag indicating whether to debias the covariance estimator using
a degree of freedom adjustment.
**cov_config
Additional covariance-specific options. See Notes.
Returns
-------
PanelResults
Estimation results
Examples
--------
>>> from linearmodels import PooledOLS
>>> mod = PooledOLS(y, x)
>>> res = mod.fit(cov_type="clustered", cluster_entity=True)
Notes
-----
Four covariance estimators are supported:
* "unadjusted", "homoskedastic" - Assume residual are homoskedastic
* "robust", "heteroskedastic" - Control for heteroskedasticity using
White's estimator
* "clustered` - One- or two-way clustering. Configuration options are:
* ``clusters`` - Input containing 1 or 2 variables.
Clusters should be integer values, although other types will
be coerced to integer values by treating as categorical variables
* ``cluster_entity`` - Boolean flag indicating to use entity
clusters
* ``cluster_time`` - Boolean indicating to use time clusters
* "kernel" - Driscoll-Kraay HAC estimator. Configurations options are:
* ``kernel`` - One of the supported kernels (bartlett, parzen, qs).
Default is Bartlett's kernel, which is produces a covariance
estimator similar to the Newey-West covariance estimator.
* ``bandwidth`` - Bandwidth to use when computing the kernel. If
not provided, a naive default is used.
|
def fit(
self,
*,
cov_type: str = "unadjusted",
debiased: bool = True,
**cov_config: bool | float | str | IntArray | DataFrame | PanelData,
) -> PanelResults:
"""
Estimate model parameters
Parameters
----------
cov_type : str
Name of covariance estimator. See Notes.
debiased : bool
            Flag indicating whether to debias the covariance estimator using
a degree of freedom adjustment.
**cov_config
Additional covariance-specific options. See Notes.
Returns
-------
PanelResults
Estimation results
Examples
--------
>>> from linearmodels import PooledOLS
>>> mod = PooledOLS(y, x)
>>> res = mod.fit(cov_type="clustered", cluster_entity=True)
Notes
-----
Four covariance estimators are supported:
* "unadjusted", "homoskedastic" - Assume residual are homoskedastic
* "robust", "heteroskedastic" - Control for heteroskedasticity using
White's estimator
* "clustered` - One- or two-way clustering. Configuration options are:
* ``clusters`` - Input containing 1 or 2 variables.
Clusters should be integer values, although other types will
be coerced to integer values by treating as categorical variables
* ``cluster_entity`` - Boolean flag indicating to use entity
clusters
* ``cluster_time`` - Boolean indicating to use time clusters
* "kernel" - Driscoll-Kraay HAC estimator. Configurations options are:
* ``kernel`` - One of the supported kernels (bartlett, parzen, qs).
          Default is Bartlett's kernel, which produces a covariance
estimator similar to the Newey-West covariance estimator.
* ``bandwidth`` - Bandwidth to use when computing the kernel. If
not provided, a naive default is used.
"""
y = self.dependent.values2d
x = self.exog.values2d
w = self.weights.values2d
root_w = cast(Float64Array, np.sqrt(w))
wx = root_w * x
wy = root_w * y
params = _lstsq(wx, wy, rcond=None)[0]
nobs = y.shape[0]
df_model = x.shape[1]
df_resid = nobs - df_model
cov_config = self._setup_clusters(cov_config)
extra_df = 0
if "extra_df" in cov_config:
cov_config = cov_config.copy()
_extra_df = cov_config.pop("extra_df")
assert isinstance(_extra_df, (str, int))
extra_df = int(_extra_df)
cov = setup_covariance_estimator(
self._cov_estimators,
cov_type,
wy,
wx,
params,
self.dependent.entity_ids,
self.dependent.time_ids,
debiased=debiased,
extra_df=extra_df,
**cov_config,
)
weps = wy - wx @ params
index = self.dependent.index
fitted = DataFrame(x @ params, index, ["fitted_values"])
effects = DataFrame(
np.full_like(np.asarray(fitted), np.nan), index, ["estimated_effects"]
)
eps = y - fitted.values
idiosyncratic = DataFrame(eps, index, ["idiosyncratic"])
residual_ss = float(np.squeeze(weps.T @ weps))
e = y
if self._constant:
e = e - (w * y).sum() / w.sum()
total_ss = float(np.squeeze(w.T @ (e**2)))
r2 = 1 - residual_ss / total_ss
res = self._postestimation(
params, cov, debiased, df_resid, weps, wy, wx, root_w
)
res.update(
dict(
df_resid=df_resid,
df_model=df_model,
nobs=y.shape[0],
residual_ss=residual_ss,
total_ss=total_ss,
r2=r2,
wresids=weps,
resids=eps,
index=self.dependent.index,
fitted=fitted,
effects=effects,
idiosyncratic=idiosyncratic,
)
)
return PanelResults(res)
|
(self, *, cov_type: str = 'unadjusted', debiased: bool = True, **cov_config: bool | float | str | numpy.ndarray | pandas.core.frame.DataFrame | linearmodels.panel.data.PanelData) -> linearmodels.panel.results.PanelResults
|
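A minimal usage sketch for the covariance options documented above, using the package's own generate_panel_data helper (the formula "y ~ 1 + x1" assumes the column names that helper produces):

from linearmodels import PooledOLS
from linearmodels.panel import generate_panel_data

panel = generate_panel_data()
mod = PooledOLS.from_formula("y ~ 1 + x1", panel.data)
res_homo = mod.fit(cov_type="unadjusted")
res_robust = mod.fit(cov_type="robust")
res_clust = mod.fit(cov_type="clustered", cluster_entity=True, cluster_time=True)
res_dk = mod.fit(cov_type="kernel", kernel="bartlett")
print(res_clust.std_errors)  # standard errors under two-way clustering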
43,054 |
linearmodels.panel.model
|
predict
|
Predict values for additional data
Parameters
----------
params : array_like
Model parameters (nvar by 1)
exog : array_like
Exogenous regressors (nobs by nvar)
data : DataFrame
Values to use when making predictions from a model constructed
from a formula
    eval_env : int
        Depth to use when evaluating formulas.
    context : mapping
        Mapping of names to values used when evaluating formulas. If not
        provided, the calling namespace at depth ``eval_env`` is captured.
Returns
-------
DataFrame
Fitted values from supplied data and parameters
Notes
-----
If `data` is not None, then `exog` must be None.
Predictions from models constructed using formulas can
    be computed using either `exog`, which will treat these as
arrays of values corresponding to the formula-processed data, or using
`data` which will be processed using the formula used to construct the
values corresponding to the original model specification.
|
def predict(
self,
params: ArrayLike,
*,
exog: PanelDataLike | None = None,
data: PanelDataLike | None = None,
eval_env: int = 1,
context: Mapping[str, Any] | None = None,
) -> DataFrame:
"""
Predict values for additional data
Parameters
----------
params : array_like
Model parameters (nvar by 1)
exog : array_like
Exogenous regressors (nobs by nvar)
data : DataFrame
Values to use when making predictions from a model constructed
from a formula
        eval_env : int
            Depth to use when evaluating formulas.
        context : mapping
            Mapping of names to values used when evaluating formulas. If not
            provided, the calling namespace at depth ``eval_env`` is captured.
Returns
-------
DataFrame
Fitted values from supplied data and parameters
Notes
-----
If `data` is not None, then `exog` must be None.
Predictions from models constructed using formulas can
        be computed using either `exog`, which will treat these as
arrays of values corresponding to the formula-processed data, or using
`data` which will be processed using the formula used to construct the
values corresponding to the original model specification.
"""
if data is not None and self.formula is None:
raise ValueError(
"Unable to use data when the model was not " "created using a formula."
)
if data is not None and exog is not None:
raise ValueError(
"Predictions can only be constructed using one "
"of exog or data, but not both."
)
if exog is not None:
exog = PanelData(exog).dataframe
else:
assert self._formula is not None
assert data is not None
if context is None:
context = capture_context(eval_env)
parser = PanelFormulaParser(self._formula, data, context=context)
exog = parser.exog
x = exog.values
params = np.atleast_2d(np.asarray(params))
if params.shape[0] == 1:
params = params.T
if x.shape[1] != params.shape[0]:
raise ValueError(
EXOG_PREDICT_MSG.format(
x_shape=x.shape[1], params_shape=params.shape[0]
)
)
pred = DataFrame(x @ params, index=exog.index, columns=["predictions"])
return pred
|
(self, params: Union[numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series], *, exog: Union[linearmodels.panel.data.PanelData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType] = None, data: Union[linearmodels.panel.data.PanelData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType] = None, eval_env: int = 1, context: Optional[collections.abc.Mapping[str, Any]] = None) -> pandas.core.frame.DataFrame
|
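A sketch of the two prediction paths described in the notes, formula-processed data versus a pre-built exog array (model construction reuses the generate_panel_data helper; column names are assumptions):

from linearmodels import PooledOLS
from linearmodels.panel import generate_panel_data

panel = generate_panel_data()
mod = PooledOLS.from_formula("y ~ 1 + x1", panel.data)
res = mod.fit()
# Formula path: `data` is re-processed with the model's original formula.
pred_from_data = mod.predict(res.params, data=panel.data)
# Array path: `exog` must already match the formula-processed design matrix.
pred_from_exog = mod.predict(res.params, exog=mod.exog.dataframe)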
43,056 |
linearmodels.panel.model
|
RandomEffects
|
One-way Random Effects model for panel data
Parameters
----------
dependent : array_like
Dependent (left-hand-side) variable (time by entity)
exog : array_like
Exogenous or right-hand-side variables (variable by time by entity).
weights : array_like
        Weights to use in estimation. Assumes the residual variance is
        proportional to the inverse of the weight so that the residual times
        the weight should be homoskedastic.
Notes
-----
The model is given by
.. math::
y_{it} = \beta^{\prime}x_{it} + u_i + \epsilon_{it}
where :math:`u_i` is a shock that is independent of :math:`x_{it}` but
common to all entities i.
|
class RandomEffects(_PanelModelBase):
r"""
One-way Random Effects model for panel data
Parameters
----------
dependent : array_like
Dependent (left-hand-side) variable (time by entity)
exog : array_like
Exogenous or right-hand-side variables (variable by time by entity).
weights : array_like
        Weights to use in estimation. Assumes the residual variance is
        proportional to the inverse of the weight so that the residual times
        the weight should be homoskedastic.
Notes
-----
The model is given by
.. math::
y_{it} = \beta^{\prime}x_{it} + u_i + \epsilon_{it}
where :math:`u_i` is a shock that is independent of :math:`x_{it}` but
common to all entities i.
"""
def __init__(
self,
dependent: PanelDataLike,
exog: PanelDataLike,
*,
weights: PanelDataLike | None = None,
check_rank: bool = True,
) -> None:
super().__init__(dependent, exog, weights=weights, check_rank=check_rank)
@classmethod
def from_formula(
cls,
formula: str,
data: PanelDataLike,
*,
weights: PanelDataLike | None = None,
check_rank: bool = True,
) -> RandomEffects:
"""
Create a model from a formula
Parameters
----------
formula : str
Formula to transform into model. Conforms to formulaic formula
rules.
data : array_like
Data structure that can be coerced into a PanelData. In most
cases, this should be a multi-index DataFrame where the level 0
index contains the entities and the level 1 contains the time.
weights: array_like
            Weights to use in estimation. Assumes the residual variance is
            proportional to the inverse of the weight so that the residual times
the weight should be homoskedastic.
check_rank : bool
Flag indicating whether to perform a rank check on the exogenous
variables to ensure that the model is identified. Skipping this
check can reduce the time required to validate a model
specification. Results may be numerically unstable if this check
is skipped and the matrix is not full rank.
Returns
-------
RandomEffects
Model specified using the formula
Notes
-----
Unlike standard formula syntax, it is necessary to explicitly include
a constant using the constant indicator (1)
Examples
--------
>>> from linearmodels import RandomEffects
>>> from linearmodels.panel import generate_panel_data
>>> panel_data = generate_panel_data()
>>> mod = RandomEffects.from_formula("y ~ 1 + x1", panel_data.data)
>>> res = mod.fit()
"""
parser = PanelFormulaParser(formula, data, context=capture_context(1))
dependent, exog = parser.data
mod = cls(dependent, exog, weights=weights, check_rank=check_rank)
mod.formula = formula
return mod
def fit(
self,
*,
small_sample: bool = False,
cov_type: str = "unadjusted",
debiased: bool = True,
**cov_config: bool | float | str | IntArray | DataFrame | PanelData,
) -> RandomEffectsResults:
"""
Estimate model parameters
Parameters
----------
small_sample : bool
Apply a small-sample correction to the estimate of the variance of
the random effect.
cov_type : str
Name of covariance estimator (see notes). Default is "unadjusted".
debiased : bool
            Flag indicating whether to debias the covariance estimator using
a degree of freedom adjustment.
**cov_config
Additional covariance-specific options. See Notes.
Returns
-------
RandomEffectsResults
Estimation results
Examples
--------
>>> from linearmodels import RandomEffects
>>> mod = RandomEffects(y, x)
>>> res = mod.fit(cov_type="clustered", cluster_entity=True)
Notes
-----
Four covariance estimators are supported:
* "unadjusted", "homoskedastic" - Assume residual are homoskedastic
* "robust", "heteroskedastic" - Control for heteroskedasticity using
White's estimator
* "clustered` - One- or two-way clustering. Configuration options are:
* ``clusters`` - Input containing 1 or 2 variables.
Clusters should be integer values, although other types will
be coerced to integer values by treating as categorical variables
* ``cluster_entity`` - Boolean flag indicating to use entity
clusters
* ``cluster_time`` - Boolean indicating to use time clusters
* "kernel" - Driscoll-Kraay HAC estimator. Configurations options are:
* ``kernel`` - One of the supported kernels (bartlett, parzen, qs).
          Default is Bartlett's kernel, which produces a covariance
estimator similar to the Newey-West covariance estimator.
* ``bandwidth`` - Bandwidth to use when computing the kernel. If
not provided, a naive default is used.
"""
w = self.weights.values2d
root_w = cast(Float64Array, np.sqrt(w))
demeaned_dep = self.dependent.demean("entity", weights=self.weights)
demeaned_exog = self.exog.demean("entity", weights=self.weights)
assert isinstance(demeaned_dep, PanelData)
assert isinstance(demeaned_exog, PanelData)
y = demeaned_dep.values2d
x = demeaned_exog.values2d
if self.has_constant:
w_sum = w.sum()
y_gm = (w * self.dependent.values2d).sum(0) / w_sum
x_gm = (w * self.exog.values2d).sum(0) / w_sum
y += root_w * y_gm
x += root_w * x_gm
params = _lstsq(x, y, rcond=None)[0]
weps = y - x @ params
wybar = self.dependent.mean("entity", weights=self.weights)
wxbar = self.exog.mean("entity", weights=self.weights)
params = _lstsq(np.asarray(wxbar), np.asarray(wybar), rcond=None)[0]
wu = np.asarray(wybar) - np.asarray(wxbar) @ params
nobs = weps.shape[0]
neffects = wu.shape[0]
nvar = x.shape[1]
sigma2_e = float(np.squeeze(weps.T @ weps)) / (nobs - nvar - neffects + 1)
ssr = float(np.squeeze(wu.T @ wu))
t = np.asarray(self.dependent.count("entity"))
unbalanced = np.ptp(t) != 0
if small_sample and unbalanced:
ssr = float(np.squeeze((t * wu).T @ wu))
wx_df = cast(DataFrame, root_w * self.exog.dataframe)
means = wx_df.groupby(level=0).transform("mean").values
denom = means.T @ means
sums = wx_df.groupby(level=0).sum().values
num = sums.T @ sums
tr = np.trace(np.linalg.inv(denom) @ num)
sigma2_u = max(0, (ssr - (neffects - nvar) * sigma2_e) / (nobs - tr))
else:
t_bar = neffects / ((1.0 / t).sum())
sigma2_u = max(0, ssr / (neffects - nvar) - sigma2_e / t_bar)
rho = sigma2_u / (sigma2_u + sigma2_e)
theta = 1.0 - np.sqrt(sigma2_e / (t * sigma2_u + sigma2_e))
theta_out = DataFrame(theta, columns=["theta"], index=wybar.index)
wy: Float64Array = np.asarray(root_w * self.dependent.values2d, dtype=float)
wx: Float64Array = np.asarray(root_w * self.exog.values2d, dtype=float)
index = self.dependent.index
reindex = index.levels[0][index.codes[0]]
wybar = (theta * wybar).loc[reindex]
wxbar = (theta * wxbar).loc[reindex]
wy -= wybar.values
wx -= wxbar.values
params = _lstsq(wx, wy, rcond=None)[0]
df_resid = wy.shape[0] - wx.shape[1]
cov_config = self._setup_clusters(cov_config)
extra_df = 0
if "extra_df" in cov_config:
cov_config = cov_config.copy()
_extra_df = cov_config.pop("extra_df")
assert isinstance(_extra_df, (str, int))
extra_df = int(_extra_df)
cov = setup_covariance_estimator(
self._cov_estimators,
cov_type,
wy,
np.asarray(wx),
params,
self.dependent.entity_ids,
self.dependent.time_ids,
debiased=debiased,
extra_df=extra_df,
**cov_config,
)
weps = wy - wx @ params
eps = weps / root_w
index = self.dependent.index
fitted = DataFrame(self.exog.values2d @ params, index, ["fitted_values"])
effects = DataFrame(
self.dependent.values2d - np.asarray(fitted) - eps,
index,
["estimated_effects"],
)
idiosyncratic = DataFrame(eps, index, ["idiosyncratic"])
residual_ss = float(np.squeeze(weps.T @ weps))
wmu: float | Float64Array = 0.0
if self.has_constant:
wmu = root_w * _lstsq(root_w, wy, rcond=None)[0]
wy_demeaned = wy - wmu
total_ss = float(np.squeeze(wy_demeaned.T @ wy_demeaned))
r2 = 1 - residual_ss / total_ss
res = self._postestimation(
params, cov, debiased, df_resid, weps, wy, wx, root_w
)
res.update(
dict(
df_resid=df_resid,
df_model=x.shape[1],
nobs=y.shape[0],
residual_ss=residual_ss,
total_ss=total_ss,
r2=r2,
resids=eps,
wresids=weps,
index=index,
sigma2_eps=sigma2_e,
sigma2_effects=sigma2_u,
rho=rho,
theta=theta_out,
fitted=fitted,
effects=effects,
idiosyncratic=idiosyncratic,
)
)
return RandomEffectsResults(res)
|
(dependent: 'PanelDataLike', exog: 'PanelDataLike', *, weights: 'PanelDataLike | None' = None, check_rank: 'bool' = True) -> 'None'
|
43,071 |
linearmodels.panel.model
|
fit
|
Estimate model parameters
Parameters
----------
small_sample : bool
Apply a small-sample correction to the estimate of the variance of
the random effect.
cov_type : str
Name of covariance estimator (see notes). Default is "unadjusted".
debiased : bool
        Flag indicating whether to debias the covariance estimator using
a degree of freedom adjustment.
**cov_config
Additional covariance-specific options. See Notes.
Returns
-------
RandomEffectsResults
Estimation results
Examples
--------
>>> from linearmodels import RandomEffects
>>> mod = RandomEffects(y, x)
>>> res = mod.fit(cov_type="clustered", cluster_entity=True)
Notes
-----
Four covariance estimators are supported:
* "unadjusted", "homoskedastic" - Assume residual are homoskedastic
* "robust", "heteroskedastic" - Control for heteroskedasticity using
White's estimator
* "clustered` - One- or two-way clustering. Configuration options are:
* ``clusters`` - Input containing 1 or 2 variables.
Clusters should be integer values, although other types will
be coerced to integer values by treating as categorical variables
* ``cluster_entity`` - Boolean flag indicating to use entity
clusters
* ``cluster_time`` - Boolean indicating to use time clusters
* "kernel" - Driscoll-Kraay HAC estimator. Configurations options are:
* ``kernel`` - One of the supported kernels (bartlett, parzen, qs).
      Default is Bartlett's kernel, which produces a covariance
estimator similar to the Newey-West covariance estimator.
* ``bandwidth`` - Bandwidth to use when computing the kernel. If
not provided, a naive default is used.
|
def fit(
self,
*,
small_sample: bool = False,
cov_type: str = "unadjusted",
debiased: bool = True,
**cov_config: bool | float | str | IntArray | DataFrame | PanelData,
) -> RandomEffectsResults:
"""
Estimate model parameters
Parameters
----------
small_sample : bool
Apply a small-sample correction to the estimate of the variance of
the random effect.
cov_type : str
Name of covariance estimator (see notes). Default is "unadjusted".
debiased : bool
            Flag indicating whether to debias the covariance estimator using
a degree of freedom adjustment.
**cov_config
Additional covariance-specific options. See Notes.
Returns
-------
RandomEffectsResults
Estimation results
Examples
--------
>>> from linearmodels import RandomEffects
>>> mod = RandomEffects(y, x)
>>> res = mod.fit(cov_type="clustered", cluster_entity=True)
Notes
-----
Four covariance estimators are supported:
* "unadjusted", "homoskedastic" - Assume residual are homoskedastic
* "robust", "heteroskedastic" - Control for heteroskedasticity using
White's estimator
* "clustered` - One- or two-way clustering. Configuration options are:
* ``clusters`` - Input containing 1 or 2 variables.
Clusters should be integer values, although other types will
be coerced to integer values by treating as categorical variables
* ``cluster_entity`` - Boolean flag indicating to use entity
clusters
* ``cluster_time`` - Boolean indicating to use time clusters
* "kernel" - Driscoll-Kraay HAC estimator. Configurations options are:
* ``kernel`` - One of the supported kernels (bartlett, parzen, qs).
          Default is Bartlett's kernel, which produces a covariance
estimator similar to the Newey-West covariance estimator.
* ``bandwidth`` - Bandwidth to use when computing the kernel. If
not provided, a naive default is used.
"""
w = self.weights.values2d
root_w = cast(Float64Array, np.sqrt(w))
demeaned_dep = self.dependent.demean("entity", weights=self.weights)
demeaned_exog = self.exog.demean("entity", weights=self.weights)
assert isinstance(demeaned_dep, PanelData)
assert isinstance(demeaned_exog, PanelData)
y = demeaned_dep.values2d
x = demeaned_exog.values2d
if self.has_constant:
w_sum = w.sum()
y_gm = (w * self.dependent.values2d).sum(0) / w_sum
x_gm = (w * self.exog.values2d).sum(0) / w_sum
y += root_w * y_gm
x += root_w * x_gm
params = _lstsq(x, y, rcond=None)[0]
weps = y - x @ params
wybar = self.dependent.mean("entity", weights=self.weights)
wxbar = self.exog.mean("entity", weights=self.weights)
params = _lstsq(np.asarray(wxbar), np.asarray(wybar), rcond=None)[0]
wu = np.asarray(wybar) - np.asarray(wxbar) @ params
nobs = weps.shape[0]
neffects = wu.shape[0]
nvar = x.shape[1]
sigma2_e = float(np.squeeze(weps.T @ weps)) / (nobs - nvar - neffects + 1)
ssr = float(np.squeeze(wu.T @ wu))
t = np.asarray(self.dependent.count("entity"))
unbalanced = np.ptp(t) != 0
if small_sample and unbalanced:
ssr = float(np.squeeze((t * wu).T @ wu))
wx_df = cast(DataFrame, root_w * self.exog.dataframe)
means = wx_df.groupby(level=0).transform("mean").values
denom = means.T @ means
sums = wx_df.groupby(level=0).sum().values
num = sums.T @ sums
tr = np.trace(np.linalg.inv(denom) @ num)
sigma2_u = max(0, (ssr - (neffects - nvar) * sigma2_e) / (nobs - tr))
else:
t_bar = neffects / ((1.0 / t).sum())
sigma2_u = max(0, ssr / (neffects - nvar) - sigma2_e / t_bar)
rho = sigma2_u / (sigma2_u + sigma2_e)
theta = 1.0 - np.sqrt(sigma2_e / (t * sigma2_u + sigma2_e))
theta_out = DataFrame(theta, columns=["theta"], index=wybar.index)
wy: Float64Array = np.asarray(root_w * self.dependent.values2d, dtype=float)
wx: Float64Array = np.asarray(root_w * self.exog.values2d, dtype=float)
index = self.dependent.index
reindex = index.levels[0][index.codes[0]]
wybar = (theta * wybar).loc[reindex]
wxbar = (theta * wxbar).loc[reindex]
wy -= wybar.values
wx -= wxbar.values
params = _lstsq(wx, wy, rcond=None)[0]
df_resid = wy.shape[0] - wx.shape[1]
cov_config = self._setup_clusters(cov_config)
extra_df = 0
if "extra_df" in cov_config:
cov_config = cov_config.copy()
_extra_df = cov_config.pop("extra_df")
assert isinstance(_extra_df, (str, int))
extra_df = int(_extra_df)
cov = setup_covariance_estimator(
self._cov_estimators,
cov_type,
wy,
np.asarray(wx),
params,
self.dependent.entity_ids,
self.dependent.time_ids,
debiased=debiased,
extra_df=extra_df,
**cov_config,
)
weps = wy - wx @ params
eps = weps / root_w
index = self.dependent.index
fitted = DataFrame(self.exog.values2d @ params, index, ["fitted_values"])
effects = DataFrame(
self.dependent.values2d - np.asarray(fitted) - eps,
index,
["estimated_effects"],
)
idiosyncratic = DataFrame(eps, index, ["idiosyncratic"])
residual_ss = float(np.squeeze(weps.T @ weps))
wmu: float | Float64Array = 0.0
if self.has_constant:
wmu = root_w * _lstsq(root_w, wy, rcond=None)[0]
wy_demeaned = wy - wmu
total_ss = float(np.squeeze(wy_demeaned.T @ wy_demeaned))
r2 = 1 - residual_ss / total_ss
res = self._postestimation(
params, cov, debiased, df_resid, weps, wy, wx, root_w
)
res.update(
dict(
df_resid=df_resid,
df_model=x.shape[1],
nobs=y.shape[0],
residual_ss=residual_ss,
total_ss=total_ss,
r2=r2,
resids=eps,
wresids=weps,
index=index,
sigma2_eps=sigma2_e,
sigma2_effects=sigma2_u,
rho=rho,
theta=theta_out,
fitted=fitted,
effects=effects,
idiosyncratic=idiosyncratic,
)
)
return RandomEffectsResults(res)
|
(self, *, small_sample: bool = False, cov_type: str = 'unadjusted', debiased: bool = True, **cov_config: bool | float | str | numpy.ndarray | pandas.core.frame.DataFrame | linearmodels.panel.data.PanelData) -> linearmodels.panel.results.RandomEffectsResults
|
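A sketch of fitting the random effects model above and reading back the variance decomposition it computes (res.theta and res.variance_decomposition are assumed result attributes corresponding to the theta, sigma2_eps, sigma2_effects, and rho entries built in fit):

from linearmodels import RandomEffects
from linearmodels.panel import generate_panel_data

panel = generate_panel_data()
mod = RandomEffects.from_formula("y ~ 1 + x1", panel.data)
res = mod.fit(small_sample=True, cov_type="clustered", cluster_entity=True)
print(res.theta)                   # per-entity quasi-demeaning weights
print(res.variance_decomposition)  # sigma2_effects, sigma2_eps, rho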
43,074 |
linearmodels.system.model
|
SUR
|
Seemingly unrelated regression estimation (SUR/SURE)
Parameters
----------
equations : dict
Dictionary-like structure containing dependent and exogenous variable
    values. Each key is an equation label and must be a string. Each
value must be either a tuple of the form (dependent,
exog, [weights]) or a dictionary with keys "dependent" and "exog" and
the optional key "weights".
sigma : array_like
Prespecified residual covariance to use in GLS estimation. If not
provided, FGLS is implemented based on an estimate of sigma.
Notes
-----
Estimates a set of regressions which are seemingly unrelated in the sense
that separate estimation would lead to consistent parameter estimates.
Each equation is of the form
.. math::
y_{i,k} = x_{i,k}\beta_i + \epsilon_{i,k}
    where k denotes the equation and i denotes the observation index. By
stacking vertically arrays of dependent and placing the exogenous
variables into a block diagonal array, the entire system can be compactly
expressed as
.. math::
Y = X\beta + \epsilon
where
.. math::
Y = \left[\begin{array}{x}Y_1 \\ Y_2 \\ \vdots \\ Y_K\end{array}\right]
and
.. math::
X = \left[\begin{array}{cccc}
X_1 & 0 & \ldots & 0 \\
0 & X_2 & \dots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \dots & X_K
\end{array}\right]
The system OLS estimator is
.. math::
\hat{\beta}_{OLS} = (X'X)^{-1}X'Y
When certain conditions are satisfied, a GLS estimator of the form
.. math::
\hat{\beta}_{GLS} = (X'\Omega^{-1}X)^{-1}X'\Omega^{-1}Y
can improve accuracy of coefficient estimates where
.. math::
\Omega = \Sigma \otimes I_N
where :math:`\Sigma` is the covariance matrix of the residuals.
SUR is a special case of 3SLS where there are no endogenous regressors and
no instruments.
|
class SUR(_LSSystemModelBase):
r"""
Seemingly unrelated regression estimation (SUR/SURE)
Parameters
----------
equations : dict
Dictionary-like structure containing dependent and exogenous variable
    values. Each key is an equation label and must be a string. Each
value must be either a tuple of the form (dependent,
exog, [weights]) or a dictionary with keys "dependent" and "exog" and
the optional key "weights".
sigma : array_like
Prespecified residual covariance to use in GLS estimation. If not
provided, FGLS is implemented based on an estimate of sigma.
Notes
-----
Estimates a set of regressions which are seemingly unrelated in the sense
that separate estimation would lead to consistent parameter estimates.
Each equation is of the form
.. math::
y_{i,k} = x_{i,k}\beta_i + \epsilon_{i,k}
    where k denotes the equation and i denotes the observation index. By
stacking vertically arrays of dependent and placing the exogenous
variables into a block diagonal array, the entire system can be compactly
expressed as
.. math::
Y = X\beta + \epsilon
where
.. math::
Y = \left[\begin{array}{x}Y_1 \\ Y_2 \\ \vdots \\ Y_K\end{array}\right]
and
.. math::
X = \left[\begin{array}{cccc}
X_1 & 0 & \ldots & 0 \\
0 & X_2 & \dots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \dots & X_K
\end{array}\right]
The system OLS estimator is
.. math::
\hat{\beta}_{OLS} = (X'X)^{-1}X'Y
When certain conditions are satisfied, a GLS estimator of the form
.. math::
\hat{\beta}_{GLS} = (X'\Omega^{-1}X)^{-1}X'\Omega^{-1}Y
can improve accuracy of coefficient estimates where
.. math::
\Omega = \Sigma \otimes I_N
where :math:`\Sigma` is the covariance matrix of the residuals.
SUR is a special case of 3SLS where there are no endogenous regressors and
no instruments.
"""
def __init__(
self,
equations: Mapping[
str, Mapping[str, ArrayLike | None] | Sequence[ArrayLike | None]
],
*,
sigma: ArrayLike | None = None,
) -> None:
if not isinstance(equations, Mapping):
raise TypeError("equations must be a dictionary-like")
for key in equations:
if not isinstance(key, str):
raise ValueError("Equation labels (keys) must be strings")
reformatted = {}
for key in equations:
eqn = equations[key]
if isinstance(eqn, tuple):
if len(eqn) == 3:
w = eqn[-1]
eqn = eqn[:2]
eqn = eqn + (None, None) + (w,)
else:
eqn = eqn + (None, None)
reformatted[key] = eqn
super().__init__(reformatted, sigma=sigma)
self._model_name = "Seemingly Unrelated Regression (SUR)"
@classmethod
def multivariate_ls(cls, dependent: ArrayLike, exog: ArrayLike) -> SUR:
"""
Interface for specification of multivariate regression models
Parameters
----------
dependent : array_like
nobs by ndep array of dependent variables
exog : array_like
nobs by nvar array of exogenous regressors common to all models
Returns
-------
model : SUR
Model instance
Notes
-----
Utility function to simplify the construction of multivariate
regression models which all use the same regressors. Constructs
the dictionary of equations from the variables using the common
exogenous variable.
Examples
--------
A simple CAP-M can be estimated as a multivariate regression
>>> from linearmodels.datasets import french
>>> from linearmodels.system import SUR
>>> data = french.load()
>>> portfolios = data[["S1V1","S1V5","S5V1","S5V5"]]
>>> factors = data[["MktRF"]].copy()
>>> factors["alpha"] = 1
>>> mod = SUR.multivariate_ls(portfolios, factors)
"""
equations = {}
dependent_ivd = IVData(dependent, var_name="dependent")
exog_ivd = IVData(exog, var_name="exog")
for col in dependent_ivd.pandas:
# TODO: Bug in pandas-stubs
# https://github.com/pandas-dev/pandas-stubs/issues/97
equations[str(col)] = (dependent_ivd.pandas[[col]], exog_ivd.pandas)
return cls(equations)
@classmethod
def from_formula(
cls,
formula: str | dict[str, str],
data: DataFrame,
*,
sigma: ArrayLike | None = None,
weights: Mapping[str, ArrayLike] | None = None,
) -> SUR:
"""
Specify a SUR using the formula interface
Parameters
----------
formula : {str, dict[str, str]}
Either a string or a dictionary of strings where each value in
the dictionary represents a single equation. See Notes for a
description of the accepted syntax
data : DataFrame
Frame containing named variables
sigma : array_like
Prespecified residual covariance to use in GLS estimation. If
not provided, FGLS is implemented based on an estimate of sigma.
weights : dict[str, array_like]
            Dictionary-like object (e.g. a DataFrame) containing variable
            weights. Each entry must have the same number of observations as
            data. If an equation label is not a key in weights, the weights
            will be set to unity.
Returns
-------
model : SUR
Model instance
Notes
-----
Models can be specified in one of two ways. The first uses curly
braces to encapsulate equations. The second uses a dictionary
where each key is an equation name.
Examples
--------
The simplest format uses standard formulas for each equation
in a dictionary. Best practice is to use an Ordered Dictionary
>>> import pandas as pd
>>> import numpy as np
>>> data = pd.DataFrame(np.random.randn(500, 4),
... columns=["y1", "x1_1", "y2", "x2_1"])
>>> from linearmodels.system import SUR
>>> formula = {"eq1": "y1 ~ 1 + x1_1", "eq2": "y2 ~ 1 + x2_1"}
>>> mod = SUR.from_formula(formula, data)
The second format uses curly braces {} to surround distinct equations
>>> formula = "{y1 ~ 1 + x1_1} {y2 ~ 1 + x2_1}"
>>> mod = SUR.from_formula(formula, data)
It is also possible to include equation labels when using curly braces
>>> formula = "{eq1: y1 ~ 1 + x1_1} {eq2: y2 ~ 1 + x2_1}"
>>> mod = SUR.from_formula(formula, data)
"""
context = capture_context(1)
parser = SystemFormulaParser(formula, data, weights, context=context)
eqns = parser.data
mod = cls(eqns, sigma=sigma)
mod.formula = formula
return mod
|
(equations: 'Mapping[str, Mapping[str, ArrayLike | None] | Sequence[ArrayLike | None]]', *, sigma: 'ArrayLike | None' = None) -> 'None'
|
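A sketch of the tuple form handled by __init__ above, where each equation is (dependent, exog[, weights]); column names are illustrative and method="gls" is assumed from the GLS estimator described in the notes:

import numpy as np
import pandas as pd
from linearmodels.system import SUR

data = pd.DataFrame(np.random.randn(500, 4),
                    columns=["y1", "x1_1", "y2", "x2_1"])
data["const"] = 1.0
equations = {
    "eq1": (data[["y1"]], data[["const", "x1_1"]]),
    "eq2": (data[["y2"]], data[["const", "x2_1"]]),
}
res = SUR(equations).fit(method="gls")
print(res.params)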
43,075 |
linearmodels.system.model
|
__init__
| null |
def __init__(
self,
equations: Mapping[
str, Mapping[str, ArrayLike | None] | Sequence[ArrayLike | None]
],
*,
sigma: ArrayLike | None = None,
) -> None:
if not isinstance(equations, Mapping):
raise TypeError("equations must be a dictionary-like")
for key in equations:
if not isinstance(key, str):
raise ValueError("Equation labels (keys) must be strings")
reformatted = {}
for key in equations:
eqn = equations[key]
if isinstance(eqn, tuple):
if len(eqn) == 3:
w = eqn[-1]
eqn = eqn[:2]
eqn = eqn + (None, None) + (w,)
else:
eqn = eqn + (None, None)
reformatted[key] = eqn
super().__init__(reformatted, sigma=sigma)
self._model_name = "Seemingly Unrelated Regression (SUR)"
|
(self, equations: collections.abc.Mapping[str, collections.abc.Mapping[str, typing.Union[numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType]] | collections.abc.Sequence[typing.Union[numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType]]], *, sigma: Union[numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series, NoneType] = None) -> NoneType
|
43,094 |
linearmodels.asset_pricing.model
|
TradedFactorModel
|
Linear factor models estimator applicable to traded factors
Parameters
----------
portfolios : array_like
Test portfolio returns (nobs by nportfolio)
factors : array_like
Priced factor returns (nobs by nfactor)
Notes
-----
    Implements time-series estimators of risk premia, factor loadings,
    and zero-alpha tests.
The model estimated is
.. math::
r_{it}^e = \alpha_i + f_t \beta_i + \epsilon_{it}
where :math:`r_{it}^e` is the excess return on test portfolio i and
:math:`f_t` are the traded factor returns. The model is directly
tested using the estimated values :math:`\hat{\alpha}_i`. Risk premia,
:math:`\lambda_i` are estimated using the sample averages of the factors,
which must be excess returns on traded portfolios.
|
class TradedFactorModel(_FactorModelBase):
r"""
Linear factor models estimator applicable to traded factors
Parameters
----------
portfolios : array_like
Test portfolio returns (nobs by nportfolio)
factors : array_like
Priced factor returns (nobs by nfactor)
Notes
-----
    Implements time-series estimators of risk premia, factor loadings,
    and zero-alpha tests.
The model estimated is
.. math::
r_{it}^e = \alpha_i + f_t \beta_i + \epsilon_{it}
where :math:`r_{it}^e` is the excess return on test portfolio i and
:math:`f_t` are the traded factor returns. The model is directly
tested using the estimated values :math:`\hat{\alpha}_i`. Risk premia,
:math:`\lambda_i` are estimated using the sample averages of the factors,
which must be excess returns on traded portfolios.
"""
def __init__(self, portfolios: IVDataLike, factors: IVDataLike):
super().__init__(portfolios, factors)
@classmethod
def from_formula(
cls, formula: str, data: DataFrame, *, portfolios: DataFrame | None = None
) -> TradedFactorModel:
"""
Parameters
----------
formula : str
Formula modified for the syntax described in the notes
data : DataFrame
DataFrame containing the variables used in the formula
portfolios : array_like
Portfolios to be used in the model
Returns
-------
TradedFactorModel
Model instance
Notes
-----
        The formula can be used in one of two ways. The first specifies only the
        factors and uses the data provided in ``portfolios`` as the test portfolios.
        The second specifies the portfolios using ``+`` to separate the test portfolios
and ``~`` to separate the test portfolios from the factors.
Examples
--------
>>> from linearmodels.datasets import french
>>> from linearmodels.asset_pricing import TradedFactorModel
>>> data = french.load()
>>> formula = "S1M1 + S1M5 + S3M3 + S5M1 + S5M5 ~ MktRF + SMB + HML"
>>> mod = TradedFactorModel.from_formula(formula, data)
Using only factors
>>> portfolios = data[["S1M1", "S1M5", "S3M1", "S3M5", "S5M1", "S5M5"]]
>>> formula = "MktRF + SMB + HML"
>>> mod = TradedFactorModel.from_formula(formula, data, portfolios=portfolios)
"""
factors, portfolios, formula = cls._prepare_data_from_formula(
formula, data, portfolios
)
mod = cls(portfolios, factors)
mod.formula = formula
return mod
def fit(
self,
cov_type: str = "robust",
debiased: bool = True,
**cov_config: str | float,
) -> LinearFactorModelResults:
"""
Estimate model parameters
Parameters
----------
cov_type : str
Name of covariance estimator
debiased : bool
Flag indicating whether to debias the covariance estimator using
a degree of freedom adjustment
**cov_config : dict
Additional covariance-specific options. See Notes.
Returns
-------
LinearFactorModelResults
Results class with parameter estimates, covariance and test statistics
Notes
-----
Supported covariance estimators are:
* "robust" - Heteroskedasticity-robust covariance estimator
* "kernel" - Heteroskedasticity and Autocorrelation consistent (HAC)
covariance estimator
The kernel covariance estimator takes the optional arguments
``kernel``, one of "bartlett", "parzen" or "qs" (quadratic spectral)
and ``bandwidth`` (a positive integer).
"""
p = self.portfolios.ndarray
f = self.factors.ndarray
nportfolio = p.shape[1]
nobs, nfactor = f.shape
fc = np.c_[np.ones((nobs, 1)), f]
rp = f.mean(0)[:, None]
fe = f - f.mean(0)
b = np.linalg.pinv(fc) @ p
eps = p - fc @ b
alphas = b[:1].T
nloading = (nfactor + 1) * nportfolio
xpxi = np.eye(nloading + nfactor)
xpxi[:nloading, :nloading] = np.kron(
np.eye(nportfolio), np.linalg.pinv(fc.T @ fc / nobs)
)
f_rep = np.tile(fc, (1, nportfolio))
eps_rep = np.tile(eps, (nfactor + 1, 1)) # 1 2 3 ... 25 1 2 3 ...
eps_rep = eps_rep.ravel(order="F")
eps_rep = np.reshape(eps_rep, (nobs, (nfactor + 1) * nportfolio), order="F")
xe = f_rep * eps_rep
xe = np.c_[xe, fe]
if cov_type in ("robust", "heteroskedastic"):
cov_est = HeteroskedasticCovariance(
xe, inv_jacobian=xpxi, center=False, debiased=debiased, df=fc.shape[1]
)
rp_cov_est = HeteroskedasticCovariance(
fe, jacobian=np.eye(f.shape[1]), center=False, debiased=debiased, df=1
)
elif cov_type == "kernel":
kernel = get_string(cov_config, "kernel")
bandwidth = get_float(cov_config, "bandwidth")
cov_est = KernelCovariance(
xe,
inv_jacobian=xpxi,
center=False,
debiased=debiased,
df=fc.shape[1],
bandwidth=bandwidth,
kernel=kernel,
)
bw = cov_est.bandwidth
_cov_config = {k: v for k, v in cov_config.items()}
_cov_config["bandwidth"] = bw
rp_cov_est = KernelCovariance(
fe,
jacobian=np.eye(f.shape[1]),
center=False,
debiased=debiased,
df=1,
bandwidth=bw,
kernel=kernel,
)
else:
raise ValueError(f"Unknown cov_type: {cov_type}")
full_vcv = cov_est.cov
rp_cov = rp_cov_est.cov
vcv = full_vcv[:nloading, :nloading]
# Rearrange VCV
order = np.reshape(
np.arange((nfactor + 1) * nportfolio), (nportfolio, nfactor + 1)
)
order = order.T.ravel()
vcv = vcv[order][:, order]
# Return values
alpha_vcv = vcv[:nportfolio, :nportfolio]
stat = float(np.squeeze(alphas.T @ np.linalg.pinv(alpha_vcv) @ alphas))
jstat = WaldTestStatistic(
stat, "All alphas are 0", nportfolio, name="J-statistic"
)
params = b.T
betas = b[1:].T
residual_ss = (eps**2).sum()
e = p - p.mean(0)[None, :]
total_ss = (e**2).sum()
r2 = 1 - residual_ss / total_ss
param_names = []
for portfolio in self.portfolios.cols:
param_names.append(f"alpha-{portfolio}")
for factor in self.factors.cols:
param_names.append(f"beta-{portfolio}-{factor}")
for factor in self.factors.cols:
param_names.append(f"lambda-{factor}")
res = AttrDict(
params=params,
cov=full_vcv,
betas=betas,
rp=rp,
rp_cov=rp_cov,
alphas=alphas,
alpha_vcv=alpha_vcv,
jstat=jstat,
rsquared=r2,
total_ss=total_ss,
residual_ss=residual_ss,
param_names=param_names,
portfolio_names=self.portfolios.cols,
factor_names=self.factors.cols,
name=self._name,
cov_type=cov_type,
model=self,
nobs=nobs,
rp_names=self.factors.cols,
cov_est=cov_est,
)
return LinearFactorModelResults(res)
|
(portfolios: 'IVDataLike', factors: 'IVDataLike')
|
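A sketch joining the formula interface to estimation (the dataset and column names come from the docstring examples; res.risk_premia is an assumed results attribute for the rp entry built in fit):

from linearmodels.datasets import french
from linearmodels.asset_pricing import TradedFactorModel

data = french.load()
formula = "S1M1 + S1M5 + S3M3 + S5M1 + S5M5 ~ MktRF + SMB + HML"
mod = TradedFactorModel.from_formula(formula, data)
res = mod.fit(cov_type="kernel", kernel="bartlett")
print(res.risk_premia)  # sample means of the traded factors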
43,095 |
linearmodels.asset_pricing.model
|
__init__
| null |
def __init__(self, portfolios: IVDataLike, factors: IVDataLike):
super().__init__(portfolios, factors)
|
(self, portfolios: Union[linearmodels.iv.data.IVData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series], factors: Union[linearmodels.iv.data.IVData, numpy.ndarray, pandas.core.frame.DataFrame, pandas.core.series.Series])
|
43,097 |
linearmodels.asset_pricing.model
|
__str__
| null |
def __str__(self) -> str:
out = self.__class__.__name__
f, p = self.factors.shape[1], self.portfolios.shape[1]
out += f" with {f} factors, {p} test portfolios"
return out
|
(self) -> str
|
43,101 |
linearmodels.asset_pricing.model
|
fit
|
Estimate model parameters
Parameters
----------
cov_type : str
Name of covariance estimator
debiased : bool
Flag indicating whether to debias the covariance estimator using
a degree of freedom adjustment
**cov_config : dict
Additional covariance-specific options. See Notes.
Returns
-------
LinearFactorModelResults
Results class with parameter estimates, covariance and test statistics
Notes
-----
Supported covariance estimators are:
* "robust" - Heteroskedasticity-robust covariance estimator
* "kernel" - Heteroskedasticity and Autocorrelation consistent (HAC)
covariance estimator
The kernel covariance estimator takes the optional arguments
``kernel``, one of "bartlett", "parzen" or "qs" (quadratic spectral)
and ``bandwidth`` (a positive integer).
|
def fit(
self,
cov_type: str = "robust",
debiased: bool = True,
**cov_config: str | float,
) -> LinearFactorModelResults:
"""
Estimate model parameters
Parameters
----------
cov_type : str
Name of covariance estimator
debiased : bool
Flag indicating whether to debias the covariance estimator using
a degree of freedom adjustment
**cov_config : dict
Additional covariance-specific options. See Notes.
Returns
-------
LinearFactorModelResults
Results class with parameter estimates, covariance and test statistics
Notes
-----
Supported covariance estimators are:
* "robust" - Heteroskedasticity-robust covariance estimator
* "kernel" - Heteroskedasticity and Autocorrelation consistent (HAC)
covariance estimator
The kernel covariance estimator takes the optional arguments
``kernel``, one of "bartlett", "parzen" or "qs" (quadratic spectral)
and ``bandwidth`` (a positive integer).
"""
p = self.portfolios.ndarray
f = self.factors.ndarray
nportfolio = p.shape[1]
nobs, nfactor = f.shape
fc = np.c_[np.ones((nobs, 1)), f]
rp = f.mean(0)[:, None]
fe = f - f.mean(0)
b = np.linalg.pinv(fc) @ p
eps = p - fc @ b
alphas = b[:1].T
nloading = (nfactor + 1) * nportfolio
xpxi = np.eye(nloading + nfactor)
xpxi[:nloading, :nloading] = np.kron(
np.eye(nportfolio), np.linalg.pinv(fc.T @ fc / nobs)
)
f_rep = np.tile(fc, (1, nportfolio))
eps_rep = np.tile(eps, (nfactor + 1, 1)) # 1 2 3 ... 25 1 2 3 ...
eps_rep = eps_rep.ravel(order="F")
eps_rep = np.reshape(eps_rep, (nobs, (nfactor + 1) * nportfolio), order="F")
xe = f_rep * eps_rep
xe = np.c_[xe, fe]
if cov_type in ("robust", "heteroskedastic"):
cov_est = HeteroskedasticCovariance(
xe, inv_jacobian=xpxi, center=False, debiased=debiased, df=fc.shape[1]
)
rp_cov_est = HeteroskedasticCovariance(
fe, jacobian=np.eye(f.shape[1]), center=False, debiased=debiased, df=1
)
elif cov_type == "kernel":
kernel = get_string(cov_config, "kernel")
bandwidth = get_float(cov_config, "bandwidth")
cov_est = KernelCovariance(
xe,
inv_jacobian=xpxi,
center=False,
debiased=debiased,
df=fc.shape[1],
bandwidth=bandwidth,
kernel=kernel,
)
bw = cov_est.bandwidth
_cov_config = {k: v for k, v in cov_config.items()}
_cov_config["bandwidth"] = bw
rp_cov_est = KernelCovariance(
fe,
jacobian=np.eye(f.shape[1]),
center=False,
debiased=debiased,
df=1,
bandwidth=bw,
kernel=kernel,
)
else:
raise ValueError(f"Unknown cov_type: {cov_type}")
full_vcv = cov_est.cov
rp_cov = rp_cov_est.cov
vcv = full_vcv[:nloading, :nloading]
# Rearrange VCV
order = np.reshape(
np.arange((nfactor + 1) * nportfolio), (nportfolio, nfactor + 1)
)
order = order.T.ravel()
vcv = vcv[order][:, order]
# Return values
alpha_vcv = vcv[:nportfolio, :nportfolio]
stat = float(np.squeeze(alphas.T @ np.linalg.pinv(alpha_vcv) @ alphas))
jstat = WaldTestStatistic(
stat, "All alphas are 0", nportfolio, name="J-statistic"
)
params = b.T
betas = b[1:].T
residual_ss = (eps**2).sum()
e = p - p.mean(0)[None, :]
total_ss = (e**2).sum()
r2 = 1 - residual_ss / total_ss
param_names = []
for portfolio in self.portfolios.cols:
param_names.append(f"alpha-{portfolio}")
for factor in self.factors.cols:
param_names.append(f"beta-{portfolio}-{factor}")
for factor in self.factors.cols:
param_names.append(f"lambda-{factor}")
res = AttrDict(
params=params,
cov=full_vcv,
betas=betas,
rp=rp,
rp_cov=rp_cov,
alphas=alphas,
alpha_vcv=alpha_vcv,
jstat=jstat,
rsquared=r2,
total_ss=total_ss,
residual_ss=residual_ss,
param_names=param_names,
portfolio_names=self.portfolios.cols,
factor_names=self.factors.cols,
name=self._name,
cov_type=cov_type,
model=self,
nobs=nobs,
rp_names=self.factors.cols,
cov_est=cov_est,
)
return LinearFactorModelResults(res)
|
(self, cov_type: str = 'robust', debiased: bool = True, **cov_config: str | float) -> linearmodels.asset_pricing.results.LinearFactorModelResults
|
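The J-statistic assembled in fit tests whether all alphas are jointly zero; a short inspection sketch continuing the TradedFactorModel example above (res.j_statistic is the assumed results-class name for the jstat entry):

res = mod.fit(cov_type="robust")  # `mod` from the sketch above
print(res.j_statistic)            # WaldTestStatistic: "All alphas are 0"
print(res.j_statistic.pval)       # p-value of the joint zero-alpha test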
43,123 |
linearmodels
|
test
| null |
def test(
extra_args: str | list[str] | None = None,
exit: bool = True,
append: bool = True,
location: str = "",
) -> int:
import sys
try:
import pytest
except ImportError: # pragma: no cover
raise ImportError("Need pytest to run tests")
cmd = ["--tb=auto"]
if extra_args:
if not isinstance(extra_args, list):
pytest_args = [extra_args]
else:
pytest_args = extra_args
if append:
cmd += pytest_args[:]
else:
cmd = pytest_args
print(location)
pkg = os.path.dirname(__file__)
print(pkg)
if location:
pkg = os.path.abspath(os.path.join(pkg, location))
print(pkg)
if not os.path.exists(pkg):
raise RuntimeError(f"{pkg} was not found. Unable to run tests")
cmd = [pkg] + cmd
print("running: pytest {}".format(" ".join(cmd)))
status = pytest.main(cmd)
if exit: # pragma: no cover
sys.exit(status)
return status
|
(extra_args: Union[str, list[str], NoneType] = None, exit: bool = True, append: bool = True, location: str = '') -> int
|
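A sketch of calling the test runner programmatically; the pytest filter and the "iv" subpackage location are illustrative:

import linearmodels

# Run a filtered subset of the test-suite without exiting the interpreter.
status = linearmodels.test(["-k", "ols"], exit=False, location="iv")
print("pytest exit status:", status)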
43,125 |
ltxpdflinks._extractor
|
ExtractedGraphicLinks
| null |
class ExtractedGraphicLinks:
def __init__(self, graphic_fname, size, links, *, unitlength='1bp', **kwargs):
self.graphic_fname = graphic_fname
self.unitlength = unitlength # LaTeX length
(w,h) = size
self.size = (float(w),float(h)) # (width, height)
self.links = links
self.dic = dict(kwargs)
def __repr__(self):
return self.__class__.__name__ + \
'({!r}, {!r}, {!r}, unitlength={!r}, **{!r})'.format(
self.graphic_fname,
self.size,
self.links,
self.unitlength,
self.dic
)
|
(graphic_fname, size, links, *, unitlength='1bp', **kwargs)
|
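A construction sketch for this container, using the ExtractedLink class defined below; the file name, size, and extra `page` keyword are illustrative:

from ltxpdflinks._extractor import ExtractedGraphicLinks, ExtractedLink

# Hypothetical values: a 200bp x 100bp graphic with one URI hyperlink.
link = ExtractedLink((10.0, 20.0, 80.0, 12.0), 'URI', 'https://example.com/')
glinks = ExtractedGraphicLinks('figure1.pdf', (200, 100), [link],
                               unitlength='1bp', page=1)
print(repr(glinks))  # repr echoes fname, size, links, unitlength and extras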
43,126 |
ltxpdflinks._extractor
|
__init__
| null |
def __init__(self, graphic_fname, size, links, *, unitlength='1bp', **kwargs):
self.graphic_fname = graphic_fname
self.unitlength = unitlength # LaTeX length
(w,h) = size
self.size = (float(w),float(h)) # (width, height)
self.links = links
self.dic = dict(kwargs)
|
(self, graphic_fname, size, links, *, unitlength='1bp', **kwargs)
|
43,127 |
ltxpdflinks._extractor
|
__repr__
| null |
def __repr__(self):
return self.__class__.__name__ + \
'({!r}, {!r}, {!r}, unitlength={!r}, **{!r})'.format(
self.graphic_fname,
self.size,
self.links,
self.unitlength,
self.dic
)
|
(self)
|
43,128 |
ltxpdflinks._extractor
|
ExtractedLink
|
..........
- `link_bbox` is relative to the page bottom left corner in user space
units, given as `(x, y, w, h)`
- `link_type` is one of 'URI', 'latex-ref', 'latex-cite'. When extracting
      special types of URLs (e.g. 'latexpdf://xxx/xxx'), the extracted type
      is 'URI' with the special URI; use a relevant SpecialLinkConverter to
      convert the special URIs to specialized types.
|
class ExtractedLink:
"""
..........
- `link_bbox` is relative to the page bottom left corner in user space
units, given as `(x, y, w, h)`
- `link_type` is one of 'URI', 'latex-ref', 'latex-cite'. When extracting
      special types of URLs (e.g. 'latexpdf://xxx/xxx'), the extracted type
      is 'URI' with the special URI; use a relevant SpecialLinkConverter to
      convert the special URIs to specialized types.
"""
def __init__(self, link_bbox, link_type, link_target, **kwargs):
super().__init__()
x, y, w, h = link_bbox
self.link_bbox = (float(x), float(y), float(w), float(h))
self.link_type = link_type
self.link_target = link_target
self.dic = dict(kwargs)
def __repr__(self):
return self.__class__.__name__ + '({!r}, {!r}, {!r}, **{!r})'.format(
self.link_bbox,
self.link_type,
self.link_target,
self.dic
)
|
(link_bbox, link_type, link_target, **kwargs)
|
43,129 |
ltxpdflinks._extractor
|
__init__
| null |
def __init__(self, link_bbox, link_type, link_target, **kwargs):
super().__init__()
x, y, w, h = link_bbox
self.link_bbox = (float(x), float(y), float(w), float(h))
self.link_type = link_type
self.link_target = link_target
self.dic = dict(kwargs)
|
(self, link_bbox, link_type, link_target, **kwargs)
|
43,130 |
ltxpdflinks._extractor
|
__repr__
| null |
def __repr__(self):
return self.__class__.__name__ + '({!r}, {!r}, {!r}, **{!r})'.format(
self.link_bbox,
self.link_type,
self.link_target,
self.dic
)
|
(self)
|
43,131 |
ltxpdflinks._linkconverter
|
LatexRefsLinkConverter
| null |
class LatexRefsLinkConverter:
def __init__(self):
super().__init__()
def convertLinks(self, extracted_links):
"""
Go over URI hyperlinks, and convert those whose "protocol" in the URL is
"latexref".
Conversion is performed in-place, directly modifying the input object
hierarchy. This function doesn't return anything.
Argument `extracted_links` is a :py:class:`ExtractedGraphicLinks` instance.
"""
for lnk in extracted_links.links:
if lnk.link_type == 'URI':
uri = lnk.link_target
m = _rx_latexrefurl.match(uri)
if m is None:
continue
# found match! change link type.
ref_type, ref_target = m.group('ref_type'), m.group('ref_target')
ref_target = urllib.parse.unquote(ref_target)
if ref_type == 'ref':
lnk.link_type = 'latex-ref'
lnk.link_target = ref_target
continue
if ref_type == 'cite':
lnk.link_type = 'latex-cite'
lnk.link_target = ref_target
continue
logger.warning("Unsupported ref_type in special URL %r", uri)
# done! everything was modified in-place, so we don't return anything
return
|
()
|
43,132 |
ltxpdflinks._linkconverter
|
__init__
| null |
def __init__(self):
super().__init__()
|
(self)
|
43,133 |
ltxpdflinks._linkconverter
|
convertLinks
|
Go over URI hyperlinks, and convert those whose "protocol" in the URL is
"latexref".
Conversion is performed in-place, directly modifying the input object
hierarchy. This function doesn't return anything.
Argument `extracted_links` is a :py:class:`ExtractedGraphicLinks` instance.
|
def convertLinks(self, extracted_links):
"""
Go over URI hyperlinks, and convert those whose "protocol" in the URL is
"latexref".
Conversion is performed in-place, directly modifying the input object
hierarchy. This function doesn't return anything.
Argument `extracted_links` is a :py:class:`ExtractedGraphicLinks` instance.
"""
for lnk in extracted_links.links:
if lnk.link_type == 'URI':
uri = lnk.link_target
m = _rx_latexrefurl.match(uri)
if m is None:
continue
# found match! change link type.
ref_type, ref_target = m.group('ref_type'), m.group('ref_target')
ref_target = urllib.parse.unquote(ref_target)
if ref_type == 'ref':
lnk.link_type = 'latex-ref'
lnk.link_target = ref_target
continue
if ref_type == 'cite':
lnk.link_type = 'latex-cite'
lnk.link_target = ref_target
continue
logger.warning("Unsupported ref_type in special URL %r", uri)
# done! everything was modified in-place, so we don't return anything
return
|
(self, extracted_links)
|
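A sketch of the in-place conversion; the 'latexref://ref/...' URI scheme is an assumption about what _rx_latexrefurl matches:

from ltxpdflinks._extractor import ExtractedGraphicLinks, ExtractedLink
from ltxpdflinks._linkconverter import LatexRefsLinkConverter

lnk = ExtractedLink((0, 0, 50, 10), 'URI', 'latexref://ref/sec%3Aintro')
g = ExtractedGraphicLinks('fig.pdf', (100, 100), [lnk])
LatexRefsLinkConverter().convertLinks(g)
print(lnk.link_type, lnk.link_target)  # expected: latex-ref sec:intro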
43,134 |
ltxpdflinks._lplxexporter
|
LplxPictureEnvExporter
| null |
class LplxPictureEnvExporter:
def __init__(self, *, include_comments_catcode=False):
super().__init__()
self.include_comments_catcode = include_comments_catcode
def export(self, extractedgraphiclinks):
e = extractedgraphiclinks # shorthand
graphic_basefname, graphic_ext = os.path.splitext(e.graphic_fname)
s = ""
if self.include_comments_catcode:
s += r"""\catcode`\%=14\relax""" + "\n"
s += (
r"""% Automatically generated by ltxpdflinks """ + version_str + r""" on """ +
datetime.datetime.now().isoformat() + r"""
%
% LPLX - """ + _makeltxsafe(e.graphic_fname) + r"""
%
\LPLX{version=0,ltxpdflinksversion={""" + version_str +
r"""},features={bbox}}{%
\lplxGraphic{""" + _makeltxsafe(graphic_basefname) + r"""}{"""
+ _makeltxsafe(graphic_ext) + r"""}%
\lplxUserSpaceUnitLength{""" + e.unitlength + r"""}%
\lplxSetBbox{0}{0}""" + "{{{:.6g}}}{{{:.6g}}}".format(e.size[0], e.size[1]) + r"""%
%%BoundingBox: 0 0 """ + "{:d} {:d}".format(int(e.size[0]+0.5), int(e.size[1]+0.5)) + r"""
%%HiResBoundingBox: 0 0 """ + "{:.6g} {:.6g}".format(e.size[0], e.size[1]) + r"""
\lplxPicture{%
"""
)
for el in e.links:
x, y, w, h = el.link_bbox
lplxcmd = r'\lplxPutLink'
lplxtailargs = ''
if el.link_type == 'URI':
hrstart = r"""\href{{{tgt}}}""".format(tgt=_makeltxsafe(el.link_target))
lplxtailargs = '{{{hrstart}}}{{}}'.format(hrstart=hrstart)
elif el.link_type == 'latex-ref':
hrstart = r"""\hyperref[{{{tgt}}}]""".format(tgt=_makeltxsafe(el.link_target))
lplxtailargs = '{{{hrstart}}}{{}}'.format(hrstart=hrstart)
elif el.link_type == 'latex-cite':
hrstart = r"""\hyperlink{{cite.{tgt}}}""".format(tgt=_makeltxsafe(el.link_target))
lplxtailargs = '{{{hrstart}}}{{}}'.format(hrstart=hrstart)
elif el.link_type == 'latex-box':
lplxcmd, lplxtailargs = _make_latexbox_from_url(el.link_target, el)
else:
logger.warning("Ignoring link with unsupported link_type: %r", el)
continue
# s += (
# r"\put({x},{y})".format(x=el.link_bbox[0], y=el.link_bbox[1]) +
# "{" + s2 + "}\n"
# )
s += (
r"{lplxcmd}{{{x:.8g}}}{{{y:.8g}}}{{{w:.8g}}}{{{h:.8g}}}{lplxtailargs}"
.format(lplxcmd=lplxcmd, x=x, y=y, w=w, h=h, lplxtailargs=lplxtailargs)
+ r"%" + "\n"
)
s += r"""}}%""" + "\n"
return s
|
(*, include_comments_catcode=False)
|
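An export sketch continuing the converter example above (`g` is the ExtractedGraphicLinks instance built there):

from ltxpdflinks._lplxexporter import LplxPictureEnvExporter

exporter = LplxPictureEnvExporter(include_comments_catcode=True)
lplx_source = exporter.export(g)  # LaTeX \LPLX{...}\lplxPicture{...} code
print(lplx_source)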