code (string) | signature (string) | docstring (string) | loss_without_docstring (float64) | loss_with_docstring (float64) | factor (float64)
---|---|---|---|---|---|
'''
Evaluates the interpolated function and its derivative at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
Returns
-------
y : np.array or float
The interpolated function evaluated at x: y = f(x), with the same
shape as x.
dydx : np.array or float
The interpolated function's first derivative evaluated at x:
dydx = f'(x), with the same shape as x.
'''
z = np.asarray(x)
y, dydx = self._evalAndDer(z.flatten())
return y.reshape(z.shape), dydx.reshape(z.shape)
|
def eval_with_derivative(self,x)
|
Evaluates the interpolated function and its derivative at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
Returns
-------
y : np.array or float
The interpolated function evaluated at x: y = f(x), with the same
shape as x.
dydx : np.array or float
The interpolated function's first derivative evaluated at x:
dydx = f'(x), with the same shape as x.
| 3.023852 | 1.591909 | 1.899514 |
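As a quick illustration of the flatten/reshape wrapper above, here is a minimal runnable sketch; the `_evalAndDer` below is a hypothetical stand-in (f(x) = x**2), not the library's interpolant:

```python
import numpy as np

def _evalAndDer(x):
    # Hypothetical stand-in for the interpolator's internal evaluator:
    # returns f(x) = x**2 and f'(x) = 2x for a flat array.
    return x**2, 2.0*x

def eval_with_derivative(x):
    # Accept a scalar or array of any shape, evaluate on the flattened
    # values, then restore the original shape, as in the method above.
    z = np.asarray(x)
    y, dydx = _evalAndDer(z.flatten())
    return y.reshape(z.shape), dydx.reshape(z.shape)

y, dydx = eval_with_derivative([[1.0, 2.0], [3.0, 4.0]])
print(y)     # [[ 1.  4.]  [ 9. 16.]]
print(dydx)  # [[2. 4.]  [6. 8.]]
```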
'''
Evaluates the partial derivative of interpolated function with respect
to x (the first argument) at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
Returns
-------
dfdx : np.array or float
The derivative of the interpolated function with respect to x, eval-
uated at x,y: dfdx = f_x(x,y), with the same shape as x and y.
'''
xa = np.asarray(x)
ya = np.asarray(y)
return (self._derX(xa.flatten(),ya.flatten())).reshape(xa.shape)
|
def derivativeX(self,x,y)
|
Evaluates the partial derivative of interpolated function with respect
to x (the first argument) at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
Returns
-------
dfdx : np.array or float
The derivative of the interpolated function with respect to x, eval-
uated at x,y: dfdx = f_x(x,y), with the same shape as x and y.
| 3.916793 | 1.503959 | 2.604321 |
'''
Evaluates the partial derivative of the interpolated function with respect
to z (the third argument) at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
Returns
-------
dfdz : np.array or float
The derivative with respect to z of the interpolated function evaluated
at x,y,z: dfdz = f_z(x,y,z), with the same shape as x, y, and z.
'''
xa = np.asarray(x)
ya = np.asarray(y)
za = np.asarray(z)
return (self._derZ(xa.flatten(),ya.flatten(),za.flatten())).reshape(xa.shape)
|
def derivativeZ(self,x,y,z)
|
Evaluates the partial derivative of the interpolated function with respect
to z (the third argument) at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
Returns
-------
dfdz : np.array or float
The derivative with respect to z of the interpolated function evaluated
at x,y,z: dfdz = f_z(x,y,z), with the same shape as x, y, and z.
| 2.81548 | 1.412124 | 1.993791 |
'''
Evaluates the partial derivative with respect to w (the first argument)
of the interpolated function at the given input.
Parameters
----------
w : np.array or float
Real values to be evaluated in the interpolated function.
x : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
Returns
-------
dfdw : np.array or float
The derivative with respect to w of the interpolated function eval-
uated at w,x,y,z: dfdw = f_w(w,x,y,z), with the same shape as inputs.
'''
wa = np.asarray(w)
xa = np.asarray(x)
ya = np.asarray(y)
za = np.asarray(z)
return (self._derW(wa.flatten(),xa.flatten(),ya.flatten(),za.flatten())).reshape(wa.shape)
|
def derivativeW(self,w,x,y,z)
|
Evaluates the partial derivative with respect to w (the first argument)
of the interpolated function at the given input.
Parameters
----------
w : np.array or float
Real values to be evaluated in the interpolated function.
x : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
Returns
-------
dfdw : np.array or float
The derivative with respect to w of the interpolated function eval-
uated at w,x,y,z: dfdw = f_w(w,x,y,z), with the same shape as inputs.
| 2.621141 | 1.334576 | 1.964025 |
'''
Evaluates the partial derivative with respect to y (the third argument)
of the interpolated function at the given input.
Parameters
----------
w : np.array or float
Real values to be evaluated in the interpolated function.
x : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
Returns
-------
dfdy : np.array or float
The derivative with respect to y of the interpolated function eval-
uated at w,x,y,z: dfdy = f_y(w,x,y,z), with the same shape as inputs.
'''
wa = np.asarray(w)
xa = np.asarray(x)
ya = np.asarray(y)
za = np.asarray(z)
return (self._derY(wa.flatten(),xa.flatten(),ya.flatten(),za.flatten())).reshape(wa.shape)
|
def derivativeY(self,w,x,y,z)
|
Evaluates the partial derivative with respect to y (the third argument)
of the interpolated function at the given input.
Parameters
----------
w : np.array or float
Real values to be evaluated in the interpolated function.
x : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
Returns
-------
dfdy : np.array or float
The derivative with respect to y of the interpolated function eval-
uated at w,x,y,z: dfdy = f_y(w,x,y,z), with the same shape as inputs.
| 2.584193 | 1.324509 | 1.951057 |
'''
Returns the derivative of the function with respect to the first dimension.
'''
if self.i_dim == 0:
return np.ones_like(args[0])
else:
return np.zeros_like(args[0])
|
def derivative(self,*args)
|
Returns the derivative of the function with respect to the first dimension.
| 4.641074 | 3.542747 | 1.310021 |
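A self-contained sketch of the dispatch above, assuming (as the `i_dim` attribute suggests) a function that simply returns its `i_dim`-th argument:

```python
import numpy as np

class IdentitySketch:
    # Hypothetical stand-in for a function that returns its i_dim-th argument.
    def __init__(self, i_dim):
        self.i_dim = i_dim

    def derivative(self, *args):
        # The derivative w.r.t. the first argument is 1 everywhere when the
        # function is that argument itself, and 0 everywhere otherwise.
        if self.i_dim == 0:
            return np.ones_like(args[0])
        return np.zeros_like(args[0])

x = np.array([1.0, 2.0, 3.0])
print(IdentitySketch(0).derivative(x, x + 1.0))  # [1. 1. 1.]
print(IdentitySketch(1).derivative(x, x + 1.0))  # [0. 0. 0.]
```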
'''
Returns the derivative of the function with respect to the X dimension.
This is the first input whenever n_dims < 4 and the second input otherwise.
'''
if self.n_dims >= 4:
j = 1
else:
j = 0
if self.i_dim == j:
return np.ones_like(args[0])
else:
return np.zeros_like(args[0])
|
def derivativeX(self,*args)
|
Returns the derivative of the function with respect to the X dimension.
This is the first input whenever n_dims < 4 and the second input otherwise.
| 5.234828 | 2.441041 | 2.144506 |
'''
Returns the derivative of the function with respect to the W dimension.
This should only exist when n_dims >= 4.
'''
if self.n_dims >= 4:
j = 0
else:
assert False, "Derivative with respect to W can't be called when n_dims < 4!"
if self.i_dim == j:
return np.ones_like(args[0])
else:
return np.zeros_like(args[0])
|
def derivativeW(self,*args)
|
Returns the derivative of the function with respect to the W dimension.
This should only exist when n_dims >= 4.
| 5.086683 | 3.087258 | 1.647638 |
'''
Evaluate the derivative of the function. The first input must exist and should be an array.
Returns an array of identical shape to args[0] (if it exists). This is an array of zeros.
'''
if len(args) > 0:
if _isscalar(args[0]):
return 0.0
else:
shape = args[0].shape
return np.zeros(shape)
else:
return 0.0
|
def _der(self,*args)
|
Evaluate the derivative of the function. The first input must exist and should be an array.
Returns an array of identical shape to args[0] (if it exists). This is an array of zeros.
| 4.764169 | 1.825933 | 2.60917 |
'''
Returns the level and/or first derivative of the function at each value in
x. Only called internally by HARKinterpolator1D.eval_and_der (etc).
Parameters
----------
x : scalar or np.array
Set of points where we want to evaluate the interpolated function and/or its derivative.
_eval : boolean
Indicator for whether to evaluate the level of the interpolated function.
_Der : boolean
Indicator for whether to evaluate the derivative of the interpolated function.
Returns
-------
A list including the level and/or derivative of the interpolated function where requested.
'''
i = np.maximum(np.searchsorted(self.x_list[:-1],x),1)
alpha = (x-self.x_list[i-1])/(self.x_list[i]-self.x_list[i-1])
if _eval:
y = (1.-alpha)*self.y_list[i-1] + alpha*self.y_list[i]
if _Der:
dydx = (self.y_list[i] - self.y_list[i-1])/(self.x_list[i] - self.x_list[i-1])
if not self.lower_extrap:
below_lower_bound = x < self.x_list[0]
if _eval:
y[below_lower_bound] = np.nan
if _Der:
dydx[below_lower_bound] = np.nan
if self.decay_extrap:
above_upper_bound = x > self.x_list[-1]
x_temp = x[above_upper_bound] - self.x_list[-1]
if _eval:
y[above_upper_bound] = self.intercept_limit + \
self.slope_limit*x[above_upper_bound] - \
self.decay_extrap_A*np.exp(-self.decay_extrap_B*x_temp)
if _Der:
dydx[above_upper_bound] = self.slope_limit + \
self.decay_extrap_B*self.decay_extrap_A*\
np.exp(-self.decay_extrap_B*x_temp)
output = []
if _eval:
output += [y,]
if _Der:
output += [dydx,]
return output
|
def _evalOrDer(self,x,_eval,_Der)
|
Returns the level and/or first derivative of the function at each value in
x. Only called internally by HARKinterpolator1D.eval_and_der (etc).
Parameters
----------
x : scalar or np.array
Set of points where we want to evaluate the interpolated function and/or its derivative.
_eval : boolean
Indicator for whether to evaluate the level of the interpolated function.
_Der : boolean
Indicator for whether to evaluate the derivative of the interpolated function.
Returns
-------
A list including the level and/or derivative of the interpolated function where requested.
| 2.749691 | 1.763249 | 1.559446 |
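A minimal sketch of the in-bounds branch of `_evalOrDer` on a small grid (extrapolation and the exponential decay adjustment are omitted; `x_list`/`y_list` mirror the attribute names above):

```python
import numpy as np

x_list = np.array([0.0, 1.0, 2.0])
y_list = np.array([0.0, 1.0, 4.0])
x = np.array([0.5, 1.5])

# Locate each point's bracketing segment, then interpolate linearly.
i = np.maximum(np.searchsorted(x_list[:-1], x), 1)
alpha = (x - x_list[i-1])/(x_list[i] - x_list[i-1])
y = (1. - alpha)*y_list[i-1] + alpha*y_list[i]
dydx = (y_list[i] - y_list[i-1])/(x_list[i] - x_list[i-1])
print(y)     # [0.5 2.5]
print(dydx)  # [1. 3.]
```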
'''
Returns the level of the interpolated function at each value in x. Only
called internally by HARKinterpolator1D.__call__ (etc).
'''
return self._evalOrDer(x,True,False)[0]
|
def _evaluate(self,x,return_indices = False)
|
Returns the level of the interpolated function at each value in x. Only
called internally by HARKinterpolator1D.__call__ (etc).
| 16.143881 | 3.173795 | 5.086618 |
'''
Returns the level and first derivative of the function at each value in
x. Only called internally by HARKinterpolator1D.eval_and_der (etc).
'''
y,dydx = self._evalOrDer(x,True,True)
return y,dydx
|
def _evalAndDer(self,x)
|
Returns the level and first derivative of the function at each value in
x. Only called internally by HARKinterpolator1D.eval_and_der (etc).
| 10.755098 | 2.721461 | 3.951958 |
'''
Returns the first derivative of the interpolated function at each value
in x. Only called internally by HARKinterpolator1D.derivative (etc).
'''
if _isscalar(x):
pos = np.searchsorted(self.x_list,x)
if pos == 0:
dydx = self.coeffs[0,1]
elif (pos < self.n):
alpha = (x - self.x_list[pos-1])/(self.x_list[pos] - self.x_list[pos-1])
dydx = (self.coeffs[pos,1] + alpha*(2*self.coeffs[pos,2] + alpha*3*self.coeffs[pos,3]))/(self.x_list[pos] - self.x_list[pos-1])
else:
alpha = x - self.x_list[self.n-1]
dydx = self.coeffs[pos,1] - self.coeffs[pos,2]*self.coeffs[pos,3]*np.exp(alpha*self.coeffs[pos,3])
else:
m = len(x)
pos = np.searchsorted(self.x_list,x)
dydx = np.zeros(m)
if dydx.size > 0:
out_bot = pos == 0
out_top = pos == self.n
in_bnds = np.logical_not(np.logical_or(out_bot, out_top))
# Do the "in bounds" evaluation points
i = pos[in_bnds]
coeffs_in = self.coeffs[i,:]
alpha = (x[in_bnds] - self.x_list[i-1])/(self.x_list[i] - self.x_list[i-1])
dydx[in_bnds] = (coeffs_in[:,1] + alpha*(2*coeffs_in[:,2] + alpha*3*coeffs_in[:,3]))/(self.x_list[i] - self.x_list[i-1])
# Do the "out of bounds" evaluation points
dydx[out_bot] = self.coeffs[0,1]
alpha = x[out_top] - self.x_list[self.n-1]
dydx[out_top] = self.coeffs[self.n,1] - self.coeffs[self.n,2]*self.coeffs[self.n,3]*np.exp(alpha*self.coeffs[self.n,3])
return dydx
|
def _der(self,x)
|
Returns the first derivative of the interpolated function at each value
in x. Only called internally by HARKinterpolator1D.derivative (etc).
| 2.128503 | 1.858144 | 1.145499 |
'''
Returns the level and first derivative of the function at each value in
x. Only called internally by HARKinterpolator1D.eval_and_der (etc).
'''
if _isscalar(x):
pos = np.searchsorted(self.x_list,x)
if pos == 0:
y = self.coeffs[0,0] + self.coeffs[0,1]*(x - self.x_list[0])
dydx = self.coeffs[0,1]
elif (pos < self.n):
alpha = (x - self.x_list[pos-1])/(self.x_list[pos] - self.x_list[pos-1])
y = self.coeffs[pos,0] + alpha*(self.coeffs[pos,1] + alpha*(self.coeffs[pos,2] + alpha*self.coeffs[pos,3]))
dydx = (self.coeffs[pos,1] + alpha*(2*self.coeffs[pos,2] + alpha*3*self.coeffs[pos,3]))/(self.x_list[pos] - self.x_list[pos-1])
else:
alpha = x - self.x_list[self.n-1]
y = self.coeffs[pos,0] + x*self.coeffs[pos,1] - self.coeffs[pos,2]*np.exp(alpha*self.coeffs[pos,3])
dydx = self.coeffs[pos,1] - self.coeffs[pos,2]*self.coeffs[pos,3]*np.exp(alpha*self.coeffs[pos,3])
else:
m = len(x)
pos = np.searchsorted(self.x_list,x)
y = np.zeros(m)
dydx = np.zeros(m)
if y.size > 0:
out_bot = pos == 0
out_top = pos == self.n
in_bnds = np.logical_not(np.logical_or(out_bot, out_top))
# Do the "in bounds" evaluation points
i = pos[in_bnds]
coeffs_in = self.coeffs[i,:]
alpha = (x[in_bnds] - self.x_list[i-1])/(self.x_list[i] - self.x_list[i-1])
y[in_bnds] = coeffs_in[:,0] + alpha*(coeffs_in[:,1] + alpha*(coeffs_in[:,2] + alpha*coeffs_in[:,3]))
dydx[in_bnds] = (coeffs_in[:,1] + alpha*(2*coeffs_in[:,2] + alpha*3*coeffs_in[:,3]))/(self.x_list[i] - self.x_list[i-1])
# Do the "out of bounds" evaluation points
y[out_bot] = self.coeffs[0,0] + self.coeffs[0,1]*(x[out_bot] - self.x_list[0])
dydx[out_bot] = self.coeffs[0,1]
alpha = x[out_top] - self.x_list[self.n-1]
y[out_top] = self.coeffs[self.n,0] + x[out_top]*self.coeffs[self.n,1] - self.coeffs[self.n,2]*np.exp(alpha*self.coeffs[self.n,3])
dydx[out_top] = self.coeffs[self.n,1] - self.coeffs[self.n,2]*self.coeffs[self.n,3]*np.exp(alpha*self.coeffs[self.n,3])
return y, dydx
|
def _evalAndDer(self,x)
|
Returns the level and first derivative of the function at each value in
x. Only called internally by HARKinterpolator1D.eval_and_der (etc).
| 1.706338 | 1.535133 | 1.111525 |
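The in-bounds branch evaluates each cubic segment in Horner form on the normalized coordinate `alpha`, rescaling the slope by the segment width; a sketch with illustrative coefficients:

```python
import numpy as np

c = np.array([1.0, 2.0, 3.0, 4.0])   # illustrative segment coefficients c0..c3
x0, x1, x = 0.0, 2.0, 0.5            # segment endpoints and evaluation point

alpha = (x - x0)/(x1 - x0)           # position within the segment, in [0, 1]
y = c[0] + alpha*(c[1] + alpha*(c[2] + alpha*c[3]))
dydx = (c[1] + alpha*(2*c[2] + alpha*3*c[3]))/(x1 - x0)  # chain rule: d(alpha)/dx = 1/(x1 - x0)
print(y, dydx)  # 1.75 2.125
```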
'''
Returns the level of the interpolated function at each value in x,y.
Only called internally by HARKinterpolator2D.__call__ (etc).
'''
if _isscalar(x):
x_pos = max(min(self.xSearchFunc(self.x_list,x),self.x_n-1),1)
y_pos = max(min(self.ySearchFunc(self.y_list,y),self.y_n-1),1)
else:
x_pos = self.xSearchFunc(self.x_list,x)
x_pos[x_pos < 1] = 1
x_pos[x_pos > self.x_n-1] = self.x_n-1
y_pos = self.ySearchFunc(self.y_list,y)
y_pos[y_pos < 1] = 1
y_pos[y_pos > self.y_n-1] = self.y_n-1
alpha = (x - self.x_list[x_pos-1])/(self.x_list[x_pos] - self.x_list[x_pos-1])
beta = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
f = (
(1-alpha)*(1-beta)*self.f_values[x_pos-1,y_pos-1]
+ (1-alpha)*beta*self.f_values[x_pos-1,y_pos]
+ alpha*(1-beta)*self.f_values[x_pos,y_pos-1]
+ alpha*beta*self.f_values[x_pos,y_pos])
return f
|
def _evaluate(self,x,y)
|
Returns the level of the interpolated function at each value in x,y.
Only called internally by HARKinterpolator2D.__call__ (etc).
| 1.91729 | 1.57136 | 1.220147 |
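The weighting above is standard bilinear interpolation; a minimal sketch on a single grid cell with hypothetical corner values:

```python
import numpy as np

x_list = np.array([0.0, 1.0])
y_list = np.array([0.0, 1.0])
f_values = np.array([[0.0, 1.0],
                     [2.0, 3.0]])    # f_values[i, j] = f(x_list[i], y_list[j])

def bilerp(x, y):
    # alpha and beta are relative positions within the cell, as above.
    alpha = (x - x_list[0])/(x_list[1] - x_list[0])
    beta = (y - y_list[0])/(y_list[1] - y_list[0])
    return ((1-alpha)*(1-beta)*f_values[0, 0] + (1-alpha)*beta*f_values[0, 1]
            + alpha*(1-beta)*f_values[1, 0] + alpha*beta*f_values[1, 1])

print(bilerp(0.5, 0.5))  # 1.5, the average of the four corners
```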
'''
Returns the derivative with respect to x of the interpolated function
at each value in x,y. Only called internally by HARKinterpolator2D.derivativeX.
'''
if _isscalar(x):
x_pos = max(min(self.xSearchFunc(self.x_list,x),self.x_n-1),1)
y_pos = max(min(self.ySearchFunc(self.y_list,y),self.y_n-1),1)
else:
x_pos = self.xSearchFunc(self.x_list,x)
x_pos[x_pos < 1] = 1
x_pos[x_pos > self.x_n-1] = self.x_n-1
y_pos = self.ySearchFunc(self.y_list,y)
y_pos[y_pos < 1] = 1
y_pos[y_pos > self.y_n-1] = self.y_n-1
beta = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
dfdx = (
((1-beta)*self.f_values[x_pos,y_pos-1]
+ beta*self.f_values[x_pos,y_pos]) -
((1-beta)*self.f_values[x_pos-1,y_pos-1]
+ beta*self.f_values[x_pos-1,y_pos]))/(self.x_list[x_pos] - self.x_list[x_pos-1])
return dfdx
|
def _derX(self,x,y)
|
Returns the derivative with respect to x of the interpolated function
at each value in x,y. Only called internally by HARKinterpolator2D.derivativeX.
| 2.057051 | 1.701095 | 1.209251 |
'''
Returns the derivative with respect to y of the interpolated function
at each value in x,y. Only called internally by HARKinterpolator2D.derivativeY.
'''
if _isscalar(x):
x_pos = max(min(self.xSearchFunc(self.x_list,x),self.x_n-1),1)
y_pos = max(min(self.ySearchFunc(self.y_list,y),self.y_n-1),1)
else:
x_pos = self.xSearchFunc(self.x_list,x)
x_pos[x_pos < 1] = 1
x_pos[x_pos > self.x_n-1] = self.x_n-1
y_pos = self.ySearchFunc(self.y_list,y)
y_pos[y_pos < 1] = 1
y_pos[y_pos > self.y_n-1] = self.y_n-1
alpha = (x - self.x_list[x_pos-1])/(self.x_list[x_pos] - self.x_list[x_pos-1])
dfdy = (
((1-alpha)*self.f_values[x_pos-1,y_pos]
+ alpha*self.f_values[x_pos,y_pos]) -
((1-alpha)*self.f_values[x_pos-1,y_pos-1]
+ alpha*self.f_values[x_pos,y_pos-1]))/(self.y_list[y_pos] - self.y_list[y_pos-1])
return dfdy
|
def _derY(self,x,y)
|
Returns the derivative with respect to y of the interpolated function
at each value in x,y. Only called internally by HARKinterpolator2D.derivativeY.
| 2.076699 | 1.702199 | 1.220009 |
'''
Returns the derivative with respect to x of the interpolated function
at each value in x,y,z. Only called internally by HARKinterpolator3D.derivativeX.
'''
if _isscalar(x):
x_pos = max(min(self.xSearchFunc(self.x_list,x),self.x_n-1),1)
y_pos = max(min(self.ySearchFunc(self.y_list,y),self.y_n-1),1)
z_pos = max(min(self.zSearchFunc(self.z_list,z),self.z_n-1),1)
else:
x_pos = self.xSearchFunc(self.x_list,x)
x_pos[x_pos < 1] = 1
x_pos[x_pos > self.x_n-1] = self.x_n-1
y_pos = self.ySearchFunc(self.y_list,y)
y_pos[y_pos < 1] = 1
y_pos[y_pos > self.y_n-1] = self.y_n-1
z_pos = self.zSearchFunc(self.z_list,z)
z_pos[z_pos < 1] = 1
z_pos[z_pos > self.z_n-1] = self.z_n-1
beta = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
gamma = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
dfdx = (
( (1-beta)*(1-gamma)*self.f_values[x_pos,y_pos-1,z_pos-1]
+ (1-beta)*gamma*self.f_values[x_pos,y_pos-1,z_pos]
+ beta*(1-gamma)*self.f_values[x_pos,y_pos,z_pos-1]
+ beta*gamma*self.f_values[x_pos,y_pos,z_pos]) -
( (1-beta)*(1-gamma)*self.f_values[x_pos-1,y_pos-1,z_pos-1]
+ (1-beta)*gamma*self.f_values[x_pos-1,y_pos-1,z_pos]
+ beta*(1-gamma)*self.f_values[x_pos-1,y_pos,z_pos-1]
+ beta*gamma*self.f_values[x_pos-1,y_pos,z_pos]))/(self.x_list[x_pos] - self.x_list[x_pos-1])
return dfdx
|
def _derX(self,x,y,z)
|
Returns the derivative with respect to x of the interpolated function
at each value in x,y,z. Only called internally by HARKinterpolator3D.derivativeX.
| 1.56294 | 1.393144 | 1.12188 |
'''
Returns the level of the function at each value in x as the minimum among
all of the functions. Only called internally by HARKinterpolator1D.__call__.
'''
if _isscalar(x):
y = np.nanmin([f(x) for f in self.functions])
else:
m = len(x)
fx = np.zeros((m,self.funcCount))
for j in range(self.funcCount):
fx[:,j] = self.functions[j](x)
y = np.nanmin(fx,axis=1)
return y
|
def _evaluate(self,x)
|
Returns the level of the function at each value in x as the minimum among
all of the functions. Only called internally by HARKinterpolator1D.__call__.
| 4.506701 | 2.135925 | 2.109953 |
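The vectorized branch above stacks every component function's values and takes a pointwise `nanmin`; the same idea in a self-contained sketch with plain callables:

```python
import numpy as np

functions = [lambda x: x, lambda x: 2.0 - x]     # illustrative components
x = np.linspace(0.0, 2.0, 5)

fx = np.column_stack([f(x) for f in functions])  # one column per function
y = np.nanmin(fx, axis=1)                        # lower envelope at each x
print(y)  # [0.  0.5 1.  0.5 0. ]
```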
'''
Returns the level and first derivative of the function at each value in
x. Only called internally by HARKinterpolator1D.eval_and_der.
'''
m = len(x)
fx = np.zeros((m,self.funcCount))
for j in range(self.funcCount):
fx[:,j] = self.functions[j](x)
fx[np.isnan(fx)] = np.inf
i = np.argmin(fx,axis=1)
y = fx[np.arange(m),i]
dydx = np.zeros_like(y)
for j in range(self.funcCount):
c = i == j
dydx[c] = self.functions[j].derivative(x[c])
return y,dydx
|
def _evalAndDer(self,x)
|
Returns the level and first derivative of the function at each value in
x. Only called internally by HARKinterpolator1D.eval_and_der.
| 3.019638 | 1.936842 | 1.559053 |
'''
Returns the first derivative of the function with respect to X at each
value in (x,y). Only called internally by HARKinterpolator2D._derX.
'''
m = len(x)
temp = np.zeros((m,self.funcCount))
for j in range(self.funcCount):
temp[:,j] = self.functions[j](x,y)
temp[np.isnan(temp)] = np.inf
i = np.argmin(temp,axis=1)
dfdx = np.zeros_like(x)
for j in range(self.funcCount):
c = i == j
dfdx[c] = self.functions[j].derivativeX(x[c],y[c])
return dfdx
|
def _derX(self,x,y)
|
Returns the first derivative of the function with respect to X at each
value in (x,y). Only called internally by HARKinterpolator2D._derX.
| 3.104772 | 2.012811 | 1.542505 |
'''
Returns the first derivative of the function with respect to Y at each
value in (x,y). Only called internally by HARKinterpolator2D._derY.
'''
m = len(x)
temp = np.zeros((m,self.funcCount))
for j in range(self.funcCount):
temp[:,j] = self.functions[j](x,y)
temp[np.isnan(temp)] = np.inf
i = np.argmin(temp,axis=1)
dfdy = np.zeros_like(x)
for j in range(self.funcCount):
c = i == j
dfdy[c] = self.functions[j].derivativeY(x[c],y[c])
return dfdy
|
def _derY(self,x,y)
|
Returns the first derivative of the function with respect to Y at each
value in (x,y). Only called internally by HARKinterpolator2D._derY.
| 3.117497 | 2.100857 | 1.483917 |
'''
Returns the first derivative of the function with respect to Z at each
value in (x,y,z). Only called internally by HARKinterpolator3D._derZ.
'''
m = len(x)
temp = np.zeros((m,self.funcCount))
for j in range(self.funcCount):
temp[:,j] = self.functions[j](x,y,z)
temp[np.isnan(temp)] = np.inf
i = np.argmin(temp,axis=1)
dfdz = np.zeros_like(x)
for j in range(self.funcCount):
c = i == j
dfdz[c] = self.functions[j].derivativeZ(x[c],y[c],z[c])
return dfdz
|
def _derZ(self,x,y,z)
|
Returns the first derivative of the function with respect to Z at each
value in (x,y,z). Only called internally by HARKinterpolator3D._derZ.
| 3.059489 | 2.064882 | 1.481678 |
'''
Evaluate the first derivative with respect to x of the function at given
state space points.
Parameters
----------
x : np.array
First input values.
y : np.array
Second input values; should be of same shape as x.
Returns
-------
dfdx_out : np.array
First derivative of function with respect to the first input,
evaluated at (x,y), of same shape as inputs.
'''
xShift = self.lowerBound(y)
dfdx_out = self.func.derivativeX(x-xShift,y)
return dfdx_out
|
def derivativeX(self,x,y)
|
Evaluate the first derivative with respect to x of the function at given
state space points.
Parameters
----------
x : np.array
First input values.
y : np.array
Second input values; should be of same shape as x.
Returns
-------
dfdx_out : np.array
First derivative of function with respect to the first input,
evaluated at (x,y), of same shape as inputs.
| 4.922544 | 1.831708 | 2.687406 |
'''
Evaluate the first derivative with respect to y of the function at given
state space points.
Parameters
----------
x : np.array
First input values.
y : np.array
Second input values; should be of same shape as x.
Returns
-------
dfdy_out : np.array
First derivative of function with respect to the second input,
evaluated at (x,y), of same shape as inputs.
'''
xShift,xShiftDer = self.lowerBound.eval_with_derivative(y)
dfdy_out = self.func.derivativeY(x-xShift,y) - xShiftDer*self.func.derivativeX(x-xShift,y)
return dfdy_out
|
def derivativeY(self,x,y)
|
Evaluate the first derivative with respect to y of the function at given
state space points.
Parameters
----------
x : np.array
First input values.
y : np.array
Second input values; should be of same shape as x.
Returns
-------
dfdy_out : np.array
First derivative of function with respect to the second input,
evaluated at (x,y), of same shape as inputs.
| 4.593507 | 2.061322 | 2.228427 |
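The correction term in `derivativeY` is the chain rule for g(x,y) = f(x - b(y), y), which gives g_y = f_y - b'(y)*f_x. A quick numeric check with hypothetical concrete choices (f(u,y) = u*y, b(y) = y**2; neither is from the source):

```python
f = lambda u, y: u*y                 # f_x = y, f_y = u
b = lambda y: y**2                   # lower bound shift, b'(y) = 2*y
g = lambda x, y: f(x - b(y), y)      # the shifted function

x, y, h = 2.0, 0.5, 1e-6
numeric = (g(x, y + h) - g(x, y - h))/(2*h)   # central difference
analytic = (x - b(y)) - 2*y*y                 # f_y - b'(y)*f_x at the shifted point
print(numeric, analytic)                      # both approximately 1.25
```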
'''
Evaluate the first derivative with respect to z of the function at given
state space points.
Parameters
----------
x : np.array
First input values.
y : np.array
Second input values; should be of same shape as x.
z : np.array
Third input values; should be of same shape as x.
Returns
-------
dfdz_out : np.array
First derivative of function with respect to the third input,
evaluated at (x,y,z), of same shape as inputs.
'''
xShift = self.lowerBound(y)
dfdz_out = self.func.derivativeZ(x-xShift,y,z)
return dfdz_out
|
def derivativeZ(self,x,y,z)
|
Evaluate the first derivative with respect to z of the function at given
state space points.
Parameters
----------
x : np.array
First input values.
y : np.array
Second input values; should be of same shape as x.
z : np.array
Third input values; should be of same shape as x.
Returns
-------
dfdz_out : np.array
First derivative of function with respect to the third input,
evaluated at (x,y,z), of same shape as inputs.
| 4.003329 | 1.740095 | 2.300638 |
'''
Returns the derivative with respect to x of the interpolated function
at each value in x,y. Only called internally by HARKinterpolator2D.derivativeX.
'''
if _isscalar(x):
y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
alpha = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
dfdx = (1-alpha)*self.xInterpolators[y_pos-1]._der(x) + alpha*self.xInterpolators[y_pos]._der(x)
else:
m = len(x)
y_pos = np.searchsorted(self.y_list,y)
y_pos[y_pos > self.y_n-1] = self.y_n-1
y_pos[y_pos < 1] = 1
dfdx = np.zeros(m) + np.nan
if y.size > 0:
for i in range(1,self.y_n):
c = y_pos == i
if np.any(c):
alpha = (y[c] - self.y_list[i-1])/(self.y_list[i] - self.y_list[i-1])
dfdx[c] = (1-alpha)*self.xInterpolators[i-1]._der(x[c]) + alpha*self.xInterpolators[i]._der(x[c])
return dfdx
|
def _derX(self,x,y)
|
Returns the derivative with respect to x of the interpolated function
at each value in x,y. Only called internally by HARKinterpolator2D.derivativeX.
| 2.207258 | 1.803396 | 1.223946 |
'''
Returns the level of the interpolated function at each value in x,y,z.
Only called internally by HARKinterpolator3D.__call__ (etc).
'''
if _isscalar(x):
y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
alpha = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
beta = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
f = ((1-alpha)*(1-beta)*self.xInterpolators[y_pos-1][z_pos-1](x)
+ (1-alpha)*beta*self.xInterpolators[y_pos-1][z_pos](x)
+ alpha*(1-beta)*self.xInterpolators[y_pos][z_pos-1](x)
+ alpha*beta*self.xInterpolators[y_pos][z_pos](x))
else:
m = len(x)
y_pos = np.searchsorted(self.y_list,y)
y_pos[y_pos > self.y_n-1] = self.y_n-1
y_pos[y_pos < 1] = 1
z_pos = np.searchsorted(self.z_list,z)
z_pos[z_pos > self.z_n-1] = self.z_n-1
z_pos[z_pos < 1] = 1
f = np.zeros(m) + np.nan
for i in range(1,self.y_n):
for j in range(1,self.z_n):
c = np.logical_and(i == y_pos, j == z_pos)
if np.any(c):
alpha = (y[c] - self.y_list[i-1])/(self.y_list[i] - self.y_list[i-1])
beta = (z[c] - self.z_list[j-1])/(self.z_list[j] - self.z_list[j-1])
f[c] = (
(1-alpha)*(1-beta)*self.xInterpolators[i-1][j-1](x[c])
+ (1-alpha)*beta*self.xInterpolators[i-1][j](x[c])
+ alpha*(1-beta)*self.xInterpolators[i][j-1](x[c])
+ alpha*beta*self.xInterpolators[i][j](x[c]))
return f
|
def _evaluate(self,x,y,z)
|
Returns the level of the interpolated function at each value in x,y,z.
Only called internally by HARKinterpolator3D.__call__ (etc).
| 1.565841 | 1.387925 | 1.128189 |
'''
Returns the derivative with respect to y of the interpolated function
at each value in x,y,z. Only called internally by HARKinterpolator3D.derivativeY.
'''
if _isscalar(x):
z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
alpha = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
dfdy = (1-alpha)*self.xyInterpolators[z_pos-1].derivativeY(x,y) + alpha*self.xyInterpolators[z_pos].derivativeY(x,y)
else:
m = len(x)
z_pos = np.searchsorted(self.z_list,z)
z_pos[z_pos > self.z_n-1] = self.z_n-1
z_pos[z_pos < 1] = 1
dfdy = np.zeros(m) + np.nan
if x.size > 0:
for i in range(1,self.z_n):
c = z_pos == i
if np.any(c):
alpha = (z[c] - self.z_list[i-1])/(self.z_list[i] - self.z_list[i-1])
dfdy[c] = (1-alpha)*self.xyInterpolators[i-1].derivativeY(x[c],y[c]) + alpha*self.xyInterpolators[i].derivativeY(x[c],y[c])
return dfdy
|
def _derY(self,x,y,z)
|
Returns the derivative with respect to y of the interpolated function
at each value in x,y,z. Only called internally by HARKinterpolator3D.derivativeY.
| 2.093736 | 1.721008 | 1.216576 |
'''
Fills in the polarity attribute of the interpolation, determining whether
the "plus" (True) or "minus" (False) solution of the system of equations
should be used for each sector. Needs to be called in __init__.
Parameters
----------
none
Returns
-------
none
'''
# Grab a point known to be inside each sector: the midway point between
# the lower left and upper right vertex of each sector
x_temp = 0.5*(self.x_values[0:(self.x_n-1),0:(self.y_n-1)] + self.x_values[1:self.x_n,1:self.y_n])
y_temp = 0.5*(self.y_values[0:(self.x_n-1),0:(self.y_n-1)] + self.y_values[1:self.x_n,1:self.y_n])
size = (self.x_n-1)*(self.y_n-1)
x_temp = np.reshape(x_temp,size)
y_temp = np.reshape(y_temp,size)
y_pos = np.tile(np.arange(0,self.y_n-1),self.x_n-1)
x_pos = np.reshape(np.tile(np.arange(0,self.x_n-1),(self.y_n-1,1)).transpose(),size)
# Set the polarity of all sectors to "plus", then test each sector
self.polarity = np.ones((self.x_n-1,self.y_n-1),dtype=bool)
alpha, beta = self.findCoords(x_temp,y_temp,x_pos,y_pos)
polarity = np.logical_and(
np.logical_and(alpha > 0, alpha < 1),
np.logical_and(beta > 0, beta < 1))
# Update polarity: if (alpha,beta) not in the unit square, then that
# sector must use the "minus" solution instead
self.polarity = np.reshape(polarity,(self.x_n-1,self.y_n-1))
|
def updatePolarity(self)
|
Fills in the polarity attribute of the interpolation, determining whether
the "plus" (True) or "minus" (False) solution of the system of equations
should be used for each sector. Needs to be called in __init__.
Parameters
----------
none
Returns
-------
none
| 2.921455 | 2.276005 | 1.283589 |
'''
Returns the level of the interpolated function at each value in x,y.
Only called internally by HARKinterpolator2D.__call__ (etc).
'''
x_pos, y_pos = self.findSector(x,y)
alpha, beta = self.findCoords(x,y,x_pos,y_pos)
# Calculate the function at each point using bilinear interpolation
f = (
(1-alpha)*(1-beta)*self.f_values[x_pos,y_pos]
+ (1-alpha)*beta*self.f_values[x_pos,y_pos+1]
+ alpha*(1-beta)*self.f_values[x_pos+1,y_pos]
+ alpha*beta*self.f_values[x_pos+1,y_pos+1])
return f
|
def _evaluate(self,x,y)
|
Returns the level of the interpolated function at each value in x,y.
Only called internally by HARKinterpolator2D.__call__ (etc).
| 3.395984 | 2.229809 | 1.522993 |
'''
Returns the derivative with respect to x of the interpolated function
at each value in x,y. Only called internally by HARKinterpolator2D.derivativeX.
'''
x_pos, y_pos = self.findSector(x,y)
alpha, beta = self.findCoords(x,y,x_pos,y_pos)
# Get four corners data for each point
xA = self.x_values[x_pos,y_pos]
xB = self.x_values[x_pos+1,y_pos]
xC = self.x_values[x_pos,y_pos+1]
xD = self.x_values[x_pos+1,y_pos+1]
yA = self.y_values[x_pos,y_pos]
yB = self.y_values[x_pos+1,y_pos]
yC = self.y_values[x_pos,y_pos+1]
yD = self.y_values[x_pos+1,y_pos+1]
fA = self.f_values[x_pos,y_pos]
fB = self.f_values[x_pos+1,y_pos]
fC = self.f_values[x_pos,y_pos+1]
fD = self.f_values[x_pos+1,y_pos+1]
# Calculate components of the alpha,beta --> x,y delta translation matrix
alpha_x = (1-beta)*(xB-xA) + beta*(xD-xC)
alpha_y = (1-beta)*(yB-yA) + beta*(yD-yC)
beta_x = (1-alpha)*(xC-xA) + alpha*(xD-xB)
beta_y = (1-alpha)*(yC-yA) + alpha*(yD-yB)
# Invert the delta translation matrix into x,y --> alpha,beta
det = alpha_x*beta_y - beta_x*alpha_y
x_alpha = beta_y/det
x_beta = -alpha_y/det
# Calculate the derivative of f w.r.t. alpha and beta
dfda = (1-beta)*(fB-fA) + beta*(fD-fC)
dfdb = (1-alpha)*(fC-fA) + alpha*(fD-fB)
# Calculate the derivative with respect to x (and return it)
dfdx = x_alpha*dfda + x_beta*dfdb
return dfdx
|
def _derX(self,x,y)
|
Returns the derivative with respect to x of the interpolated function
at each value in x,y. Only called internally by HARKinterpolator2D.derivativeX.
| 2.065974 | 1.817713 | 1.136578 |
'''
Returns the derivative with respect to y of the interpolated function
at each value in x,y. Only called internally by HARKinterpolator2D.derivativeY.
'''
x_pos, y_pos = self.findSector(x,y)
alpha, beta = self.findCoords(x,y,x_pos,y_pos)
# Get four corners data for each point
xA = self.x_values[x_pos,y_pos]
xB = self.x_values[x_pos+1,y_pos]
xC = self.x_values[x_pos,y_pos+1]
xD = self.x_values[x_pos+1,y_pos+1]
yA = self.y_values[x_pos,y_pos]
yB = self.y_values[x_pos+1,y_pos]
yC = self.y_values[x_pos,y_pos+1]
yD = self.y_values[x_pos+1,y_pos+1]
fA = self.f_values[x_pos,y_pos]
fB = self.f_values[x_pos+1,y_pos]
fC = self.f_values[x_pos,y_pos+1]
fD = self.f_values[x_pos+1,y_pos+1]
# Calculate components of the alpha,beta --> x,y delta translation matrix
alpha_x = (1-beta)*(xB-xA) + beta*(xD-xC)
alpha_y = (1-beta)*(yB-yA) + beta*(yD-yC)
beta_x = (1-alpha)*(xC-xA) + alpha*(xD-xB)
beta_y = (1-alpha)*(yC-yA) + alpha*(yD-yB)
# Invert the delta translation matrix into x,y --> alpha,beta
det = alpha_x*beta_y - beta_x*alpha_y
y_alpha = -beta_x/det
y_beta = alpha_x/det
# Calculate the derivative of f w.r.t. alpha and beta
dfda = (1-beta)*(fB-fA) + beta*(fD-fC)
dfdb = (1-alpha)*(fC-fA) + alpha*(fD-fB)
# Calculate the derivative with respect to y (and return it)
dfdy = y_alpha*dfda + y_beta*dfdb
return dfdy
|
def _derY(self,x,y)
|
Returns the derivative with respect to y of the interpolated function
at each value in x,y. Only called internally by HARKinterpolator2D.derivativeY.
| 2.134188 | 1.869737 | 1.141437 |
'''
A "universal distance" metric that can be used as a default in many settings.
Parameters
----------
thing_A : object
A generic object.
thing_B : object
Another generic object.
Returns
-------
distance : float
The "distance" between thing_A and thing_B.
'''
# Get the types of the two inputs
typeA = type(thing_A)
typeB = type(thing_B)
if typeA is list and typeB is list:
lenA = len(thing_A) # If both inputs are lists, then the distance between
lenB = len(thing_B) # them is the maximum distance between corresponding
if lenA == lenB: # elements in the lists. If they differ in length,
distance_temp = [] # the distance is the difference in lengths.
for n in range(lenA):
distance_temp.append(distanceMetric(thing_A[n],thing_B[n]))
distance = max(distance_temp)
else:
distance = float(abs(lenA - lenB))
# If both inputs are numbers, return their difference
elif (typeA is int or typeA is float) and (typeB is int or typeB is float):
distance = float(abs(thing_A - thing_B))
# If both inputs are array-like, return the maximum absolute difference b/w
# corresponding elements (if same shape); return largest difference in dimensions
# if shapes do not align.
elif hasattr(thing_A,'shape') and hasattr(thing_B,'shape'):
if thing_A.shape == thing_B.shape:
distance = np.max(abs(thing_A - thing_B))
else:
distance = np.max(np.abs(np.array(thing_A.shape) - np.array(thing_B.shape)))
# If none of the above cases, but the objects are of the same class, call
# the distance method of one on the other
elif thing_A.__class__.__name__ == thing_B.__class__.__name__:
if thing_A.__class__.__name__ == 'function':
distance = 0.0
else:
distance = thing_A.distance(thing_B)
else: # Failsafe: the inputs are very far apart
distance = 1000.0
return distance
|
def distanceMetric(thing_A,thing_B)
|
A "universal distance" metric that can be used as a default in many settings.
Parameters
----------
thing_A : object
A generic object.
thing_B : object
Another generic object.
Returns
-------
distance : float
The "distance" between thing_A and thing_B.
| 2.939491 | 2.58688 | 1.136307 |
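Illustrative calls, assuming `distanceMetric` as defined above is in scope:

```python
import numpy as np

print(distanceMetric(1.0, 3.5))                 # 2.5: absolute difference of numbers
print(distanceMetric([1.0, 2.0], [1.0, 5.0]))   # 3.0: max over corresponding elements
print(distanceMetric([1, 2, 3], [1, 2]))        # 1.0: difference in list lengths
print(distanceMetric(np.zeros(3), np.ones(3)))  # 1.0: max absolute elementwise gap
```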
'''
Solve the dynamic model for one agent type. This function iterates on "cycles"
of an agent's model either a given number of times or until solution convergence
if an infinite horizon model is used (with agent.cycles = 0).
Parameters
----------
agent : AgentType
The microeconomic AgentType whose dynamic problem is to be solved.
verbose : boolean
If True, solution progress is printed to screen (when cycles != 1).
Returns
-------
solution : [Solution]
A list of solutions to the one period problems that the agent will
encounter in his "lifetime". Returns in reverse chronological order.
'''
# Record the flow of time when the Agent began the process, and make sure time is flowing backwards
original_time_flow = agent.time_flow
agent.timeRev()
# Check to see whether this is an (in)finite horizon problem
cycles_left = agent.cycles
infinite_horizon = cycles_left == 0
# Initialize the solution, which includes the terminal solution if it's not a pseudo-terminal period
solution = []
if not agent.pseudo_terminal:
solution.append(deepcopy(agent.solution_terminal))
# Initialize the process, then loop over cycles
solution_last = agent.solution_terminal
go = True
completed_cycles = 0
max_cycles = 5000 # escape clause
if verbose:
t_last = clock()
while go:
# Solve a cycle of the model, recording it if horizon is finite
solution_cycle = solveOneCycle(agent,solution_last)
if not infinite_horizon:
solution += solution_cycle
# Check for termination: identical solutions across cycle iterations or run out of cycles
solution_now = solution_cycle[-1]
if infinite_horizon:
if completed_cycles > 0:
solution_distance = solution_now.distance(solution_last)
go = (solution_distance > agent.tolerance and completed_cycles < max_cycles)
else: # Assume solution does not converge after only one cycle
solution_distance = 100.0
go = True
else:
cycles_left += -1
go = cycles_left > 0
# Update the "last period solution"
solution_last = solution_now
completed_cycles += 1
# Display progress if requested
if verbose:
t_now = clock()
if infinite_horizon:
print('Finished cycle #' + str(completed_cycles) + ' in ' + str(t_now-t_last) +\
' seconds, solution distance = ' + str(solution_distance))
else:
print('Finished cycle #' + str(completed_cycles) + ' of ' + str(agent.cycles) +\
' in ' + str(t_now-t_last) + ' seconds.')
t_last = t_now
# Record the last cycle if horizon is infinite (solution is still empty!)
if infinite_horizon:
solution = solution_cycle # PseudoTerminal=False impossible for infinite horizon
# Restore the direction of time to its original orientation, then return the solution
if original_time_flow:
agent.timeFwd()
return solution
|
def solveAgent(agent,verbose)
|
Solve the dynamic model for one agent type. This function iterates on "cycles"
of an agent's model either a given number of times or until solution convergence
if an infinite horizon model is used (with agent.cycles = 0).
Parameters
----------
agent : AgentType
The microeconomic AgentType whose dynamic problem is to be solved.
verbose : boolean
If True, solution progress is printed to screen (when cycles != 1).
Returns
-------
solution : [Solution]
A list of solutions to the one period problems that the agent will
encounter in his "lifetime". Returns in reverse chronological order.
| 5.233984 | 3.38791 | 1.544901 |
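The infinite-horizon branch above is a fixed-point iteration: re-solve one cycle until the distance between successive solutions drops below `agent.tolerance` (or `max_cycles` is hit). A stripped-down sketch of that loop on a scalar contraction, with the stand-ins noted in comments:

```python
tolerance, max_cycles = 1e-8, 5000
x_last, go, completed_cycles = 0.0, True, 0
while go:
    x_now = 0.5*x_last + 1.0         # stand-in for "solve one cycle of the model"
    distance = abs(x_now - x_last)   # stand-in for solution_now.distance(solution_last)
    go = distance > tolerance and completed_cycles < max_cycles
    x_last = x_now
    completed_cycles += 1
print(x_last)  # approximately 2.0, the fixed point
```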
'''
Solve one "cycle" of the dynamic model for one agent type. This function
iterates over the periods within an agent's cycle, updating the time-varying
parameters and passing them to the single period solver(s).
Parameters
----------
agent : AgentType
The microeconomic AgentType whose dynamic problem is to be solved.
solution_last : Solution
A representation of the solution of the period that comes after the
end of the sequence of one period problems. This might be the term-
inal period solution, a "pseudo terminal" solution, or simply the
solution to the earliest period from the succeeding cycle.
Returns
-------
solution_cycle : [Solution]
A list of one period solutions for one "cycle" of the AgentType's
microeconomic model. Returns in reverse chronological order.
'''
# Calculate number of periods per cycle, defaults to 1 if all variables are time invariant
if len(agent.time_vary) > 0:
name = agent.time_vary[0]
T = len(eval('agent.' + name))
else:
T = 1
# Check whether the same solution method is used in all periods
always_same_solver = 'solveOnePeriod' not in agent.time_vary
if always_same_solver:
solveOnePeriod = agent.solveOnePeriod
these_args = getArgNames(solveOnePeriod)
# Construct a dictionary to be passed to the solver
time_inv_string = ''
for name in agent.time_inv:
time_inv_string += ' \'' + name + '\' : agent.' +name + ','
time_vary_string = ''
for name in agent.time_vary:
time_vary_string += ' \'' + name + '\' : None,'
solve_dict = eval('{' + time_inv_string + time_vary_string + '}')
# Initialize the solution for this cycle, then iterate on periods
solution_cycle = []
solution_next = solution_last
for t in range(T):
# Update which single period solver to use (if it depends on time)
if not always_same_solver:
solveOnePeriod = agent.solveOnePeriod[t]
these_args = getArgNames(solveOnePeriod)
# Update time-varying single period inputs
for name in agent.time_vary:
if name in these_args:
solve_dict[name] = eval('agent.' + name + '[t]')
solve_dict['solution_next'] = solution_next
# Make a temporary dictionary for this period
temp_dict = {name: solve_dict[name] for name in these_args}
# Solve one period, add it to the solution, and move to the next period
solution_t = solveOnePeriod(**temp_dict)
solution_cycle.append(solution_t)
solution_next = solution_t
# Return the list of per-period solutions
return solution_cycle
|
def solveOneCycle(agent,solution_last)
|
Solve one "cycle" of the dynamic model for one agent type. This function
iterates over the periods within an agent's cycle, updating the time-varying
parameters and passing them to the single period solver(s).
Parameters
----------
agent : AgentType
The microeconomic AgentType whose dynamic problem is to be solved.
solution_last : Solution
A representation of the solution of the period that comes after the
end of the sequence of one period problems. This might be the term-
inal period solution, a "pseudo terminal" solution, or simply the
solution to the earliest period from the succeeding cycle.
Returns
-------
solution_cycle : [Solution]
A list of one period solutions for one "cycle" of the AgentType's
microeconomic model. Returns in reverse chronological order.
| 4.478089 | 2.49528 | 1.794624 |
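The solver dictionary above is assembled by building source strings and calling `eval`; an equivalent eval-free construction uses `getattr`. A sketch of that variant (my rewrite under the same attribute-name conventions, not the library's code; the agent and parameter names are illustrative):

```python
from types import SimpleNamespace

agent = SimpleNamespace(time_inv=['CRRA'], time_vary=['LivPrb'],
                        CRRA=2.0, LivPrb=[0.98, 0.97])

# Time-invariant inputs are read off the agent directly; time-varying slots
# start as None and are filled period by period.
solve_dict = {name: getattr(agent, name) for name in agent.time_inv}
solve_dict.update({name: None for name in agent.time_vary})
t = 0
for name in agent.time_vary:
    solve_dict[name] = getattr(agent, name)[t]
print(solve_dict)  # {'CRRA': 2.0, 'LivPrb': 0.98}
```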
'''
Helper function for copy_module_to_local(). Provides the actual copy
functionality, with highly cautious safeguards against copying over
important things.
Parameters
----------
target_path : string
String, file path to target location
my_directory_full_path: string
String, full pathname to this file's directory
my_module : string
String, name of the module to copy
Returns
-------
none
'''
if target_path == 'q' or target_path == 'Q':
print("Goodbye!")
return
elif target_path == os.path.expanduser("~") or os.path.normpath(target_path) == os.path.expanduser("~"):
print("You have indicated that the target location is "+target_path+" -- that is, you want to wipe out your home directory with the contents of "+my_module+". My programming does not allow me to do that.\n\nGoodbye!")
return
elif os.path.exists(target_path):
print("There is already a file or directory at the location "+target_path+". For safety reasons this code does not overwrite existing files.\nPlease remove the file at "+target_path+" and try again.")
return
else:
user_input = input("You have indicated you want to copy module:\n    " + my_module
+ "\nto:\n    " + target_path + "\nIs that correct? Please indicate: y / [n]\n\n")
if user_input == 'y' or user_input == 'Y':
#print("copy_tree(",my_directory_full_path,",", target_path,")")
copy_tree(my_directory_full_path, target_path)
else:
print("Goodbye!")
return
|
def copy_module(target_path, my_directory_full_path, my_module)
|
Helper function for copy_module_to_local(). Provides the actual copy
functionality, with highly cautious safeguards against copying over
important things.
Parameters
----------
target_path : string
String, file path to target location
my_directory_full_path: string
String, full pathname to this file's directory
my_module : string
String, name of the module to copy
Returns
-------
none
| 4.465725 | 2.878664 | 1.551319 |
'''
A generic distance method, which requires the existence of an attribute
called distance_criteria, giving a list of strings naming the attributes
to be considered by the distance metric.
Parameters
----------
other : object
Another object to compare this instance to.
Returns
-------
(unnamed) : float
The distance between this object and another, using the "universal
distance" metric.
'''
distance_list = [0.0]
for attr_name in self.distance_criteria:
try:
obj_A = getattr(self,attr_name)
obj_B = getattr(other,attr_name)
distance_list.append(distanceMetric(obj_A,obj_B))
except:
distance_list.append(1000.0) # if either object lacks attribute, they are not the same
return max(distance_list)
|
def distance(self,other)
|
A generic distance method, which requires the existence of an attribute
called distance_criteria, giving a list of strings naming the attributes
to be considered by the distance metric.
Parameters
----------
other : object
Another object to compare this instance to.
Returns
-------
(unnamed) : float
The distance between this object and another, using the "universal
distance" metric.
| 4.265144 | 1.953135 | 2.183743 |
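A usage sketch, assuming the `distance` method and the `distanceMetric` function from the rows above are both in scope (the Point class here is a hypothetical example):

```python
class Point:
    distance_criteria = ['x', 'y']   # attributes the metric should compare
    def __init__(self, x, y):
        self.x, self.y = x, y
    distance = distance              # bind the module-level method above

a, b = Point(0.0, 0.0), Point(3.0, 1.0)
print(a.distance(b))  # 3.0, the maximum across the listed attributes
```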
'''
Assign an arbitrary number of attributes to this agent.
Parameters
----------
**kwds : keyword arguments
Any number of keyword arguments of the form key=value. Each value
will be assigned to the attribute named in self.
Returns
-------
none
'''
for key in kwds:
setattr(self,key,kwds[key])
|
def assignParameters(self,**kwds)
|
Assign an arbitrary number of attributes to this agent.
Parameters
----------
**kwds : keyword arguments
Any number of keyword arguments of the form key=value. Each value
will be assigned to the attribute named in self.
Returns
-------
none
| 4.913225 | 1.625431 | 3.022722 |
'''
Calculates the average of an attribute of this instance. Returns NaN if no such attribute.
Parameters
----------
varname : string
The name of the attribute whose average is to be calculated. This attribute must be an
np.array or other class compatible with np.mean.
Returns
-------
avg : float or np.array
The average of this attribute. Might be an array if the axis keyword is passed.
'''
if hasattr(self,varname):
return np.mean(getattr(self,varname),**kwds)
else:
return np.nan
|
def getAvg(self,varname,**kwds)
|
Calculates the average of an attribute of this instance. Returns NaN if no such attribute.
Parameters
----------
varname : string
The name of the attribute whose average is to be calculated. This attribute must be an
np.array or other class compatible with np.mean.
Returns
-------
avg : float or np.array
The average of this attribute. Might be an array if the axis keyword is passed.
| 4.150791 | 1.425332 | 2.912156 |
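A usage sketch of the same mechanics, with an illustrative stand-in object (the class and attribute names are hypothetical):

```python
import numpy as np
from types import SimpleNamespace

s = SimpleNamespace(wealth=np.array([[1.0, 2.0], [3.0, 4.0]]))
print(np.mean(getattr(s, 'wealth')))          # 2.5, the overall average
print(np.mean(getattr(s, 'wealth'), axis=0))  # [2. 3.], per column via the axis keyword
```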
'''
Reverse the flow of time for this instance.
Parameters
----------
none
Returns
-------
none
'''
for name in self.time_vary:
exec('self.' + name + '.reverse()')
self.time_flow = not self.time_flow
|
def timeFlip(self)
|
Reverse the flow of time for this instance.
Parameters
----------
none
Returns
-------
none
| 6.12271 | 3.484853 | 1.75695 |
'''
Adds any number of parameters to time_vary for this instance.
Parameters
----------
params : string
Any number of strings naming attributes to be added to time_vary
Returns
-------
None
'''
for param in params:
if param not in self.time_vary:
self.time_vary.append(param)
|
def addToTimeVary(self,*params)
|
Adds any number of parameters to time_vary for this instance.
Parameters
----------
params : string
Any number of strings naming attributes to be added to time_vary
Returns
-------
None
| 3.727321 | 1.692169 | 2.202688 |
'''
Adds any number of parameters to time_inv for this instance.
Parameters
----------
params : string
Any number of strings naming attributes to be added to time_inv
Returns
-------
None
'''
for param in params:
if param not in self.time_inv:
self.time_inv.append(param)
|
def addToTimeInv(self,*params)
|
Adds any number of parameters to time_inv for this instance.
Parameters
----------
params : string
Any number of strings naming attributes to be added to time_inv
Returns
-------
None
| 3.804107 | 1.692532 | 2.247583 |
'''
Removes any number of parameters from time_vary for this instance.
Parameters
----------
params : string
Any number of strings naming attributes to be removed from time_vary
Returns
-------
None
'''
for param in params:
if param in self.time_vary:
self.time_vary.remove(param)
|
def delFromTimeVary(self,*params)
|
Removes any number of parameters from time_vary for this instance.
Parameters
----------
params : string
Any number of strings naming attributes to be removed from time_vary
Returns
-------
None
| 4.143197 | 1.692425 | 2.448083 |
'''
Removes any number of parameters from time_inv for this instance.
Parameters
----------
params : string
Any number of strings naming attributes to be removed from time_inv
Returns
-------
None
'''
for param in params:
if param in self.time_inv:
self.time_inv.remove(param)
|
def delFromTimeInv(self,*params)
|
Removes any number of parameters from time_inv for this instance.
Parameters
----------
params : string
Any number of strings naming attributes to be removed from time_inv
Returns
-------
None
| 4.429072 | 1.739914 | 2.54557 |
'''
Solve the model for this instance of an agent type by backward induction.
Loops through the sequence of one period problems, passing the solution
from period t+1 to the problem for period t.
Parameters
----------
verbose : boolean
If True, solution progress is printed to screen.
Returns
-------
none
'''
# Ignore floating point "errors". Numpy calls it "errors", but really it's excep-
# tions with well-defined answers such as 1.0/0.0 that is np.inf, -1.0/0.0 that is
# -np.inf, np.inf/np.inf is np.nan and so on.
with np.errstate(divide='ignore', over='ignore', under='ignore', invalid='ignore'):
self.preSolve() # Do pre-solution stuff
self.solution = solveAgent(self,verbose) # Solve the model by backward induction
if self.time_flow: # Put the solution in chronological order if this instance's time flow runs that way
self.solution.reverse()
self.addToTimeVary('solution') # Add solution to the list of time-varying attributes
self.postSolve()
|
def solve(self,verbose=False)
|
Solve the model for this instance of an agent type by backward induction.
Loops through the sequence of one period problems, passing the solution
from period t+1 to the problem for period t.
Parameters
----------
verbose : boolean
If True, solution progress is printed to screen.
Returns
-------
none
| 8.145717 | 4.598492 | 1.771389 |
for param in self.time_vary:
assert type(getattr(self,param))==list,param + ' is not a list, but should be' + \
' because it is in time_vary'
|
def checkElementsOfTimeVaryAreLists(self)
|
A method to check that elements of time_vary are lists.
| 7.254548 | 5.787442 | 1.253498 |
'''
Prepares this AgentType for a new simulation. Resets the internal random number generator,
makes initial states for all agents (using simBirth), clears histories of tracked variables.
Parameters
----------
None
Returns
-------
None
'''
self.resetRNG()
self.t_sim = 0
all_agents = np.ones(self.AgentCount,dtype=bool)
blank_array = np.zeros(self.AgentCount)
for var_name in self.poststate_vars:
exec('self.' + var_name + ' = copy(blank_array)')
self.t_age = np.zeros(self.AgentCount,dtype=int) # Number of periods since agent entry
self.t_cycle = np.zeros(self.AgentCount,dtype=int) # Which cycle period each agent is on
self.simBirth(all_agents)
self.clearHistory()
return None
|
def initializeSim(self)
|
Prepares this AgentType for a new simulation. Resets the internal random number generator,
makes initial states for all agents (using simBirth), clears histories of tracked variables.
Parameters
----------
None
Returns
-------
None
| 5.912399 | 3.182579 | 1.857738 |
'''
Simulates one period for this type. Calls the methods getMortality(), getShocks() or
readShocks, getStates(), getControls(), and getPostStates(). These should be defined for
AgentType subclasses, except getMortality (define its components simDeath and simBirth
instead) and readShocks.
Parameters
----------
None
Returns
-------
None
'''
self.getMortality() # Replace some agents with "newborns"
if self.read_shocks: # If shock histories have been pre-specified, use those
self.readShocks()
else: # Otherwise, draw shocks as usual according to subclass-specific method
self.getShocks()
self.getStates() # Determine each agent's state at decision time
self.getControls() # Determine each agent's choice or control variables based on states
self.getPostStates() # Determine each agent's post-decision / end-of-period states using states and controls
# Advance time for all agents
self.t_age = self.t_age + 1 # Age all consumers by one period
self.t_cycle = self.t_cycle + 1 # Age all consumers within their cycle
self.t_cycle[self.t_cycle == self.T_cycle] = 0
|
def simOnePeriod(self)
|
Simulates one period for this type. Calls the methods getMortality(), getShocks() or
readShocks, getStates(), getControls(), and getPostStates(). These should be defined for
AgentType subclasses, except getMortality (define its components simDeath and simBirth
instead) and readShocks.
Parameters
----------
None
Returns
-------
None
| 7.538643 | 3.513951 | 2.145347 |
'''
Makes a pre-specified history of shocks for the simulation. Shock variables should be named
in self.shock_vars, a list of strings that is subclass-specific. This method runs a subset
of the standard simulation loop by simulating only mortality and shocks; each variable named
in shock_vars is stored in a T_sim x AgentCount array in an attribute of self named X_hist.
Automatically sets self.read_shocks to True so that these pre-specified shocks are used for
all subsequent calls to simulate().
Parameters
----------
None
Returns
-------
None
'''
# Make sure time is flowing forward and re-initialize the simulation
orig_time = self.time_flow
self.timeFwd()
self.initializeSim()
# Make blank history arrays for each shock variable
for var_name in self.shock_vars:
setattr(self,var_name+'_hist',np.zeros((self.T_sim,self.AgentCount))+np.nan)
# Make and store the history of shocks for each period
for t in range(self.T_sim):
self.getMortality()
self.getShocks()
for var_name in self.shock_vars:
exec('self.' + var_name + '_hist[self.t_sim,:] = self.' + var_name)
self.t_sim += 1
self.t_age = self.t_age + 1 # Age all consumers by one period
self.t_cycle = self.t_cycle + 1 # Age all consumers within their cycle
self.t_cycle[self.t_cycle == self.T_cycle] = 0 # Resetting to zero for those who have reached the end
# Restore the flow of time and flag that shocks can be read rather than simulated
self.read_shocks = True
if not orig_time:
self.timeRev()
|
def makeShockHistory(self)
|
Makes a pre-specified history of shocks for the simulation. Shock variables should be named
in self.shock_vars, a list of strings that is subclass-specific. This method runs a subset
of the standard simulation loop by simulating only mortality and shocks; each variable named
in shock_vars is stored in a T_sim x AgentCount array in an attribute of self named X_hist.
Automatically sets self.read_shocks to True so that these pre-specified shocks are used for
all subsequent calls to simulate().
Parameters
----------
None
Returns
-------
None
| 5.495009 | 2.587573 | 2.123615 |
'''
Determines which agents in the current population "die" or should be replaced. Takes no
inputs, returns a Boolean array of size self.AgentCount, which has True for agents who die
and False for those that survive. This default implementation prints a warning and returns
all True; it must be overwritten by a subclass to govern replacement events.
Parameters
----------
None
Returns
-------
who_dies : np.array
Boolean array of size self.AgentCount indicating which agents die and are replaced.
'''
print('AgentType subclass must define method simDeath!')
who_dies = np.ones(self.AgentCount,dtype=bool)
return who_dies
|
def simDeath(self)
|
Determines which agents in the current population "die" or should be replaced. Takes no
inputs, returns a Boolean array of size self.AgentCount, which has True for agents who die
and False for those that survive. This default implementation prints a warning and returns
all True; it must be overwritten by a subclass to govern replacement events.
Parameters
----------
None
Returns
-------
who_dies : np.array
Boolean array of size self.AgentCount indicating which agents die and are replaced.
| 8.26802 | 1.811055 | 4.565306 |
'''
Reads values of shock variables for the current period from history arrays. For each
variable X named in self.shock_vars, this attribute of self is set to self.X_hist[self.t_sim,:].
This method is only ever called if self.read_shocks is True. This can be achieved by using
the method makeShockHistory() (or manually after storing a "handcrafted" shock history).
Parameters
----------
None
Returns
-------
None
'''
for var_name in self.shock_vars:
setattr(self,var_name,getattr(self,var_name+'_hist')[self.t_sim,:])
|
def readShocks(self)
|
Reads values of shock variables for the current period from history arrays. For each
variable X named in self.shock_vars, this attribute of self is set to self.X_hist[self.t_sim,:].
This method is only ever called if self.read_shocks is True. This can be achieved by using
the method makeShockHistory() (or manually after storing a "handcrafted" shock history).
Parameters
----------
None
Returns
-------
None
| 7.734284 | 1.403207 | 5.511864 |
'''
Simulates this agent type for a given number of periods (defaults to self.T_sim if no input).
Records histories of attributes named in self.track_vars in attributes named varname_hist.
Parameters
----------
sim_periods : int, optional
Number of periods to simulate. Defaults to self.T_sim.
Returns
-------
None
'''
# Ignore floating point "errors": numpy flags these as errors, but they are really
# exceptions with well-defined answers, e.g. 1.0/0.0 = np.inf, -1.0/0.0 = -np.inf,
# and np.inf/np.inf = np.nan.
with np.errstate(divide='ignore', over='ignore', under='ignore', invalid='ignore'):
orig_time = self.time_flow
self.timeFwd()
if sim_periods is None:
sim_periods = self.T_sim
for t in range(sim_periods):
self.simOnePeriod()
for var_name in self.track_vars:
getattr(self, var_name + '_hist')[self.t_sim,:] = getattr(self, var_name)
self.t_sim += 1
if not orig_time:
self.timeRev()
|
def simulate(self,sim_periods=None)
|
Simulates this agent type for a given number of periods (defaults to self.T_sim if no input).
Records histories of attributes named in self.track_vars in attributes named varname_hist.
Parameters
----------
sim_periods : int, optional
Number of periods to simulate. Defaults to self.T_sim.
Returns
-------
None
| 5.744282 | 3.566789 | 1.610491 |
'''
Solves the microeconomic problem for all AgentTypes in this market.
Parameters
----------
None
Returns
-------
None
'''
try:
multiThreadCommands(self.agents,['solve()'])
except Exception as err:
if self.print_parallel_error_once:
# Set flag to False so this is only printed once.
self.print_parallel_error_once = False
print("**** WARNING: could not execute multiThreadCommands in HARK.core.Market.solveAgents(), so using the serial version instead. This will likely be slower. The multiTreadCommands() functions failed with the following error:", '\n ', sys.exc_info()[0], ':', err) #sys.exc_info()[0])
multiThreadCommandsFake(self.agents,['solve()'])
|
def solveAgents(self)
|
Solves the microeconomic problem for all AgentTypes in this market.
Parameters
----------
None
Returns
-------
None
| 7.422371 | 5.902869 | 1.257418 |
'''
"Solves" the market by finding a "dynamic rule" that governs the aggregate
market state such that when agents believe in these dynamics, their actions
collectively generate the same dynamic rule.
Parameters
----------
None
Returns
-------
None
'''
go = True
max_loops = self.max_loops # Failsafe against infinite solution loop
completed_loops = 0
old_dynamics = None
while go: # Loop until the dynamic process converges or we hit the loop cap
self.solveAgents() # Solve each AgentType's micro problem
self.makeHistory() # "Run" the model while tracking aggregate variables
new_dynamics = self.updateDynamics() # Find a new aggregate dynamic rule
# Check to see if the dynamic rule has converged (if this is not the first loop)
if completed_loops > 0:
distance = new_dynamics.distance(old_dynamics)
else:
distance = 1000000.0
# Move to the next loop if the terminal conditions are not met
old_dynamics = new_dynamics
completed_loops += 1
go = distance >= self.tolerance and completed_loops < max_loops
self.dynamics = new_dynamics
|
def solve(self)
|
"Solves" the market by finding a "dynamic rule" that governs the aggregate
market state such that when agents believe in these dynamics, their actions
collectively generate the same dynamic rule.
Parameters
----------
None
Returns
-------
None
| 8.264059 | 4.911117 | 1.682725 |
'''
Collects attributes named in reap_vars from each AgentType in the market,
storing them in respectively named attributes of self.
Parameters
----------
none
Returns
-------
none
'''
for var_name in self.reap_vars:
harvest = []
for this_type in self.agents:
harvest.append(getattr(this_type,var_name))
setattr(self,var_name,harvest)
|
def reap(self)
|
Collects attributes named in reap_vars from each AgentType in the market,
storing them in respectively named attributes of self.
Parameters
----------
none
Returns
-------
none
| 5.963048 | 2.085916 | 2.858719 |
'''
Distributes attributes named in sow_vars from self to each AgentType
in the market, storing them in respectively named attributes.
Parameters
----------
none
Returns
-------
none
'''
for var_name in self.sow_vars:
this_seed = getattr(self,var_name)
for this_type in self.agents:
setattr(this_type,var_name,this_seed)
|
def sow(self)
|
Distributes attributes named in sow_vars from self to each AgentType
in the market, storing them in respectively named attributes.
Parameters
----------
none
Returns
-------
none
| 7.187641 | 2.198314 | 3.269615 |
'''
Processes the variables collected from agents using the function millRule,
storing the results in attributes named in sow_vars.
Parameters
----------
none
Returns
-------
none
'''
# Make a dictionary of inputs for the millRule
mill_dict = {name : getattr(self, name) for name in self.reap_vars}
mill_dict.update({name : getattr(self, name) for name in self.const_vars})
# Run the millRule and store its output in self
product = self.millRule(**mill_dict)
for j in range(len(self.sow_vars)):
this_var = self.sow_vars[j]
this_product = getattr(product,this_var)
setattr(self,this_var,this_product)
|
def mill(self)
|
Processes the variables collected from agents using the function millRule,
storing the results in attributes named in sow_vars.
Parameters
----------
none
Returns
-------
none
| 4.414976 | 2.624123 | 1.682458 |
'''
Reset the state of the market (attributes in sow_vars, etc) to some
user-defined initial state, and erase the histories of tracked variables.
Parameters
----------
none
Returns
-------
none
'''
for var_name in self.track_vars: # Reset the history of tracked variables
setattr(self,var_name + '_hist',[])
for var_name in self.sow_vars: # Set the sow variables to their initial levels
initial_val = getattr(self,var_name + '_init')
setattr(self,var_name,initial_val)
for this_type in self.agents: # Reset each AgentType in the market
this_type.reset()
|
def reset(self)
|
Reset the state of the market (attributes in sow_vars, etc) to some
user-defined initial state, and erase the histories of tracked variables.
Parameters
----------
none
Returns
-------
none
| 6.455507 | 2.844719 | 2.269295 |
'''
Record the current value of each variable X named in track_vars in an
attribute named X_hist.
Parameters
----------
none
Returns
-------
none
'''
for var_name in self.track_vars:
value_now = getattr(self,var_name)
getattr(self,var_name + '_hist').append(value_now)
|
def store(self)
|
Record the current value of each variable X named in track_vars in an
attribute named X_hist.
Parameters
----------
none
Returns
-------
none
| 6.247194 | 2.0619 | 3.029823 |
'''
Runs a loop of sow-->cultivate-->reap-->mill act_T times, tracking the
evolution of variables X named in track_vars in attributes named X_hist.
Parameters
----------
none
Returns
-------
none
'''
self.reset() # Initialize the state of the market
for t in range(self.act_T):
self.sow() # Distribute aggregated information/state to agents
self.cultivate() # Agents take action
self.reap() # Collect individual data from agents
self.mill() # Process individual data into aggregate data
self.store() # Record the current values of tracked variables in their histories
|
def makeHistory(self)
|
Runs a loop of sow-->cultivate-->reap-->mill act_T times, tracking the
evolution of variables X named in track_vars in attributes named X_hist.
Parameters
----------
none
Returns
-------
none
| 15.535761 | 3.682946 | 4.218297 |
'''
Calculates a new "aggregate dynamic rule" using the history of variables
named in track_vars, and distributes this rule to AgentTypes in agents.
Parameters
----------
none
Returns
-------
dynamics : instance
The new "aggregate dynamic rule" that agents believe in and act on.
Should have attributes named in dyn_vars.
'''
# Make a dictionary of inputs for the dynamics calculator
arg_names = list(getArgNames(self.calcDynamics))
if 'self' in arg_names:
arg_names.remove('self')
update_dict = {name : getattr(self, name + '_hist') for name in arg_names}
# Calculate a new dynamic rule and distribute it to the agents in agent_list
dynamics = self.calcDynamics(**update_dict) # User-defined dynamics calculator
for var_name in self.dyn_vars:
this_obj = getattr(dynamics,var_name)
for this_type in self.agents:
setattr(this_type,var_name,this_obj)
return dynamics
|
def updateDynamics(self)
|
Calculates a new "aggregate dynamic rule" using the history of variables
named in track_vars, and distributes this rule to AgentTypes in agents.
Parameters
----------
none
Returns
-------
dynamics : instance
The new "aggregate dynamic rule" that agents believe in and act on.
Should have attributes named in dyn_vars.
| 6.426276 | 2.910115 | 2.208255 |
'''
Returns a list of strings naming all of the arguments for the passed function.
Parameters
----------
function : function
A function whose argument names are wanted.
Returns
-------
argNames : [string]
The names of the arguments of function.
'''
argCount = function.__code__.co_argcount
argNames = function.__code__.co_varnames[:argCount]
return argNames
|
def getArgNames(function)
|
Returns a list of strings naming all of the arguments for the passed function.
Parameters
----------
function : function
A function whose argument names are wanted.
Returns
-------
argNames : [string]
The names of the arguments of function.
| 3.052374 | 1.701506 | 1.793925 |
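A minimal usage sketch of getArgNames (the helper mySquare below is a hypothetical example function, not part of the toolkit):

def mySquare(x, scale=2.0):   # hypothetical example function
    return scale * x**2

getArgNames(mySquare)   # ('x', 'scale'); keyword arguments with defaults count toward co_argcount

Note that the slice of __code__.co_varnames is a tuple; callers that need a list, as updateDynamics above does, wrap it in list().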
'''
Evaluates constant relative risk aversion (CRRA) utility of consumption c
given risk aversion parameter gam.
Parameters
----------
c : float
Consumption value
gam : float
Risk aversion
Returns
-------
(unnamed) : float
Utility
Tests
-----
Test a value which should pass:
>>> c, gamma = 1.0, 2.0 # Set two values at once with Python syntax
>>> CRRAutility(c=c, gam=gamma)
-1.0
'''
if gam == 1:
return np.log(c)
else:
return( c**(1.0 - gam) / (1.0 - gam) )
|
def CRRAutility(c, gam)
|
Evaluates constant relative risk aversion (CRRA) utility of consumption c
given risk aversion parameter gam.
Parameters
----------
c : float
Consumption value
gam : float
Risk aversion
Returns
-------
(unnamed) : float
Utility
Tests
-----
Test a value which should pass:
>>> c, gamma = 1.0, 2.0 # Set two values at once with Python syntax
>>> CRRAutility(c=c, gam=gamma)
-1.0
| 5.49182 | 1.649775 | 3.32883 |
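A quick numeric check of the formula (a sketch; assumes CRRAutility is in scope as defined above):

import numpy as np

CRRAutility(2.0, gam=2.0)   # 2**(-1.0)/(-1.0) = -0.5
CRRAutility(2.0, gam=1.0)   # log-utility limit: np.log(2.0), about 0.6931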
'''
Evaluates the inverse of the CRRA utility function (with risk aversion
parameter gam) at a given utility level u.
Parameters
----------
u : float
Utility value
gam : float
Risk aversion
Returns
-------
(unnamed) : float
Consumption corresponding to given utility value
'''
if gam == 1:
return np.exp(u)
else:
return( ((1.0-gam)*u)**(1/(1.0-gam)) )
|
def CRRAutility_inv(u, gam)
|
Evaluates the inverse of the CRRA utility function (with risk aversion
parameter gam) at a given utility level u.
Parameters
----------
u : float
Utility value
gam : float
Risk aversion
Returns
-------
(unnamed) : float
Consumption corresponding to given utility value
| 5.801214 | 2.105465 | 2.755312 |
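Because CRRAutility_inv inverts CRRAutility, a round trip should recover consumption exactly; a small sketch assuming both functions are in scope:

u = CRRAutility(2.0, gam=3.0)   # 2**(-2.0)/(-2.0) = -0.125
CRRAutility_inv(u, gam=3.0)     # ((1-3)*(-0.125))**(1/(1-3)) = 0.25**(-0.5) = 2.0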
'''
Evaluates the derivative of the inverse of the CRRA utility function (with
risk aversion parameter gam) at a given utility level u.
Parameters
----------
u : float
Utility value
gam : float
Risk aversion
Returns
-------
(unnamed) : float
Marginal consumption corresponding to given utility value
'''
if gam == 1:
return np.exp(u)
else:
return( ((1.0-gam)*u)**(gam/(1.0-gam)) )
|
def CRRAutility_invP(u, gam)
|
Evaluates the derivative of the inverse of the CRRA utility function (with
risk aversion parameter gam) at a given utility level u.
Parameters
----------
u : float
Utility value
gam : float
Risk aversion
Returns
-------
(unnamed) : float
Marginal consumption corresponding to given utility value
| 5.044789 | 2.075878 | 2.430195 |
'''
Calculate a discrete approximation to a mean one lognormal distribution.
Based on function approxLognormal; see that function's documentation for
further notes.
Parameters
----------
N : int
Size of discrete space vector to be returned.
sigma : float
standard deviation associated with underlying normal probability distribution.
Returns
-------
X : np.array
Discrete points for discrete probability mass function.
pmf : np.array
Probability associated with each point in X.
Written by Nathan M. Palmer
Based on Matlab function "setup_shocks.m," from Chris Carroll's
[Solution Methods for Microeconomic Dynamic Optimization Problems]
(http://www.econ2.jhu.edu/people/ccarroll/solvingmicrodsops/) toolkit.
Latest update: 01 May 2015
'''
mu_adj = -0.5*sigma**2
pmf,X = approxLognormal(N=N, mu=mu_adj, sigma=sigma, **kwargs)
return [pmf,X]
|
def approxMeanOneLognormal(N, sigma=1.0, **kwargs)
|
Calculate a discrete approximation to a mean one lognormal distribution.
Based on function approxLognormal; see that function's documentation for
further notes.
Parameters
----------
N : int
Size of discrete space vector to be returned.
sigma : float
standard deviation associated with underlying normal probability distribution.
Returns
-------
X : np.array
Discrete points for discrete probability mass function.
pmf : np.array
Probability associated with each point in X.
Written by Nathan M. Palmer
Based on Matlab function "setup_shocks.m," from Chris Carroll's
[Solution Methods for Microeconomic Dynamic Optimization Problems]
(http://www.econ2.jhu.edu/people/ccarroll/solvingmicrodsops/) toolkit.
Latest update: 01 May 2015
| 9.768511 | 1.565536 | 6.239721 |
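A sanity check of the mean-one property (a sketch; it assumes approxLognormal, which this function wraps but which is not shown in this excerpt, behaves as documented):

import numpy as np

pmf, X = approxMeanOneLognormal(N=7, sigma=0.1)
np.dot(pmf, X)   # approximately 1.0, by the mu = -0.5*sigma**2 adjustment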
'''
Calculate a discrete approximation to the beta distribution. May be quite
slow, as it uses a rudimentary numeric integration method to generate the
discrete approximation.
Parameters
----------
N : int
Size of discrete space vector to be returned.
a : float
First shape parameter (sometimes called alpha).
b : float
Second shape parameter (sometimes called beta).
Returns
-------
X : np.array
Discrete points for discrete probability mass function.
pmf : np.array
Probability associated with each point in X.
'''
P = 1000
vals = np.reshape(stats.beta.ppf(np.linspace(0.0,1.0,N*P),a,b),(N,P))
X = np.mean(vals,axis=1)
pmf = np.ones(N)/float(N)
return( [pmf, X] )
|
def approxBeta(N,a=1.0,b=1.0)
|
Calculate a discrete approximation to the beta distribution. May be quite
slow, as it uses a rudimentary numeric integration method to generate the
discrete approximation.
Parameters
----------
N : int
Size of discrete space vector to be returned.
a : float
First shape parameter (sometimes called alpha).
b : float
Second shape parameter (sometimes called beta).
Returns
-------
X : np.array
Discrete points for discrete probability mass function.
pmf : np.array
Probability associated with each point in X.
| 4.770117 | 2.050492 | 2.326329 |
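The equiprobable approximation should roughly reproduce the Beta mean a/(a+b); a sketch assuming scipy.stats is available, as in the function body:

import numpy as np

pmf, X = approxBeta(N=5, a=2.0, b=2.0)
pmf              # array([0.2, 0.2, 0.2, 0.2, 0.2]), equiprobable by construction
np.dot(pmf, X)   # approximately 0.5, the mean of Beta(2,2)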
'''
Makes a discrete approximation to a uniform distribution, given its bottom
and top limits and number of points.
Parameters
----------
N : int
The number of points in the discrete approximation
bot : float
The bottom of the uniform distribution
top : float
The top of the uniform distribution
Returns
-------
(unnamed) : np.array
An equiprobable discrete approximation to the uniform distribution.
'''
pmf = np.ones(N)/float(N)
center = (top+bot)/2.0
width = (top-bot)/2.0
X = center + width*np.linspace(-(N-1.0)/2.0,(N-1.0)/2.0,N)/(N/2.0)
return [pmf,X]
|
def approxUniform(N,bot=0.0,top=1.0)
|
Makes a discrete approximation to a uniform distribution, given its bottom
and top limits and number of points.
Parameters
----------
N : int
The number of points in the discrete approximation
bot : float
The bottom of the uniform distribution
top : float
The top of the uniform distribution
Returns
-------
(unnamed) : np.array
An equiprobable discrete approximation to the uniform distribution.
| 3.63045 | 2.064154 | 1.758808 |
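The returned points sit at the midpoints of N equiprobable bins; a worked sketch:

pmf, X = approxUniform(N=5, bot=0.0, top=1.0)
# pmf = [0.2, 0.2, 0.2, 0.2, 0.2]
# X   = [0.1, 0.3, 0.5, 0.7, 0.9], the bin midpoints of [0, 1]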
'''
Creates an approximation to a normal distribution with mean mu and standard
deviation sigma, returning a stochastic vector called p_vec, corresponding
to values in x_grid. If a RV is distributed x~N(mu,sigma), then the expectation
of a continuous function f() is E[f(x)] = numpy.dot(p_vec,f(x_grid)).
Parameters
----------
x_grid: numpy.array
A sorted 1D array of floats representing discrete values that a normally
distributed RV could take on.
mu: float
Mean of the normal distribution to be approximated.
sigma: float
Standard deviation of the normal distribution to be approximated.
K: int
Number of points in the normal distribution to sample.
bound: float
Truncation bound of the normal distribution, as +/- bound*sigma.
Returns
-------
p_vec: numpy.array
A stochastic vector with probability weights for each x in x_grid.
'''
x_n = x_grid.size # Number of points in the outcome grid
lower_bound = -bound # Lower bound of normal draws to consider, in SD
upper_bound = bound # Upper bound of normal draws to consider, in SD
raw_sample = np.linspace(lower_bound,upper_bound,K) # Evenly spaced draws between bounds
f_weights = stats.norm.pdf(raw_sample) # Relative probability of each draw
sample = mu + sigma*raw_sample # Adjusted bounds, given mean and stdev
w_vec = np.zeros(x_n) # A vector of outcome weights
# Find the relative position of each of the draws
sample_pos = np.searchsorted(x_grid,sample)
sample_pos[sample_pos < 1] = 1
sample_pos[sample_pos > x_n-1] = x_n-1
# Make arrays of the x_grid point directly above and below each draw
bot = x_grid[sample_pos-1]
top = x_grid[sample_pos]
alpha = (sample-bot)/(top-bot)
# Keep the weights (alpha) in bounds
alpha_clipped = np.clip(alpha,0.,1.)
# Loop through each x_grid point and add up the probability that each nearby
# draw contributes to it (accounting for distance)
for j in range(1,x_n):
c = sample_pos == j
w_vec[j-1] = w_vec[j-1] + np.dot(f_weights[c],1.0-alpha_clipped[c])
w_vec[j] = w_vec[j] + np.dot(f_weights[c],alpha_clipped[c])
# Reweight the probabilities so they sum to 1
W = np.sum(w_vec)
p_vec = w_vec/W
# Check for obvious errors, and return p_vec
assert (np.all(p_vec>=0.)) and (np.all(p_vec<=1.)) and (np.isclose(np.sum(p_vec),1.))
return p_vec
|
def makeMarkovApproxToNormal(x_grid,mu,sigma,K=351,bound=3.5)
|
Creates an approximation to a normal distribution with mean mu and standard
deviation sigma, returning a stochastic vector called p_vec, corresponding
to values in x_grid. If a RV is distributed x~N(mu,sigma), then the expectation
of a continuous function f() is E[f(x)] = numpy.dot(p_vec,f(x_grid)).
Parameters
----------
x_grid: numpy.array
A sorted 1D array of floats representing discrete values that a normally
distributed RV could take on.
mu: float
Mean of the normal distribution to be approximated.
sigma: float
Standard deviation of the normal distribution to be approximated.
K: int
Number of points in the normal distribution to sample.
bound: float
Truncation bound of the normal distribution, as +/- bound*sigma.
Returns
-------
p_vec: numpy.array
A stochastic vector with probability weights for each x in x_grid.
| 3.701686 | 2.424789 | 1.526602 |
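A sketch of the advertised expectation property; the implied variance falls slightly below sigma**2 because draws are truncated at bound standard deviations:

import numpy as np

x_grid = np.linspace(-4.0, 4.0, 41)
p_vec = makeMarkovApproxToNormal(x_grid, mu=0.0, sigma=1.0)
np.dot(p_vec, x_grid)      # approximately 0.0
np.dot(p_vec, x_grid**2)   # just under 1.0, reflecting truncation at 3.5 SD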
'''
Creates an approximation to a normal distribution with mean mu and standard
deviation sigma, by Monte Carlo.
Returns a stochastic vector called p_vec, corresponding
to values in x_grid. If a RV is distributed x~N(mu,sigma), then the expectation
of a continuous function f() is E[f(x)] = numpy.dot(p_vec,f(x_grid)).
Parameters
----------
x_grid: numpy.array
A sorted 1D array of floats representing discrete values that a normally
distributed RV could take on.
mu: float
Mean of the normal distribution to be approximated.
sigma: float
Standard deviation of the normal distribution to be approximated.
N_draws: int
Number of draws to use in Monte Carlo.
Returns
-------
p_vec: numpy.array
A stochastic vector with probability weights for each x in x_grid.
'''
# Take random draws from the desired normal distribution
random_draws = np.random.normal(loc = mu, scale = sigma, size = N_draws)
# Compute the distance between the draws and points in x_grid
distance = np.abs(x_grid[:,np.newaxis] - random_draws[np.newaxis,:])
# Find the indices of the points in x_grid that are closest to the draws
distance_minimizing_index = np.argmin(distance,axis=0)
# For each point in x_grid, the approximate probability of that point is the number
# of Monte Carlo draws that are closest to that point
p_vec = np.zeros_like(x_grid)
for p_index,p in enumerate(p_vec):
p_vec[p_index] = np.sum(distance_minimizing_index==p_index) / float(N_draws)
# Check for obvious errors, and return p_vec
assert (np.all(p_vec>=0.)) and (np.all(p_vec<=1.)) and (np.isclose(np.sum(p_vec),1.))
return p_vec
|
def makeMarkovApproxToNormalByMonteCarlo(x_grid,mu,sigma,N_draws = 10000)
|
Creates an approximation to a normal distribution with mean mu and standard
deviation sigma, by Monte Carlo.
Returns a stochastic vector called p_vec, corresponding
to values in x_grid. If a RV is distributed x~N(mu,sigma), then the expectation
of a continuous function f() is E[f(x)] = numpy.dot(p_vec,f(x_grid)).
Parameters
----------
x_grid: numpy.array
A sorted 1D array of floats representing discrete values that a normally
distributed RV could take on.
mu: float
Mean of the normal distribution to be approximated.
sigma: float
Standard deviation of the normal distribution to be approximated.
N_draws: int
Number of draws to use in Monte Carlo.
Returns
-------
p_vec: numpy.array
A stochastic vector with probability weights for each x in x_grid.
| 3.167852 | 1.765979 | 1.793822 |
'''
Function to return a discretized version of an AR1 process.
See http://www.fperri.net/TEACHING/macrotheory08/numerical.pdf for details
Parameters
----------
N: int
Size of discretized grid
sigma: float
Standard deviation of the error term
rho: float
AR1 coefficient
bound: float
The highest (lowest) grid point will be bound (-bound) multiplied by the unconditional
standard deviation of the process
Returns
-------
y: np.array
Grid points on which the discretized process takes values
trans_matrix: np.array
Markov transition array for the discretized process
Written by Edmund S. Crawley
Latest update: 27 October 2017
'''
yN = bound*sigma/((1-rho**2)**0.5)
y = np.linspace(-yN,yN,N)
d = y[1]-y[0]
trans_matrix = np.ones((N,N))
for j in range(N):
for k_1 in range(N-2):
k=k_1+1
trans_matrix[j,k] = stats.norm.cdf((y[k] + d/2.0 - rho*y[j])/sigma) - stats.norm.cdf((y[k] - d/2.0 - rho*y[j])/sigma)
trans_matrix[j,0] = stats.norm.cdf((y[0] + d/2.0 - rho*y[j])/sigma)
trans_matrix[j,N-1] = 1.0 - stats.norm.cdf((y[N-1] - d/2.0 - rho*y[j])/sigma)
return y, trans_matrix
|
def makeTauchenAR1(N, sigma=1.0, rho=0.9, bound=3.0)
|
Function to return a discretized version of an AR1 process.
See http://www.fperri.net/TEACHING/macrotheory08/numerical.pdf for details
Parameters
----------
N: int
Size of discretized grid
sigma: float
Standard deviation of the error term
rho: float
AR1 coefficient
bound: float
The highest (lowest) grid point will be bound (-bound) multiplied by the unconditional
standard deviation of the process
Returns
-------
y: np.array
Grid points on which the discretized process takes values
trans_matrix: np.array
Markov transition array for the discretized process
Written by Edmund S. Crawley
Latest update: 27 October 2017
| 3.873024 | 1.572337 | 2.463228 |
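Each row of trans_matrix integrates the conditional normal over adjacent bins, so rows sum to one exactly; a small sketch:

import numpy as np

y, trans_matrix = makeTauchenAR1(N=5, sigma=0.1, rho=0.9)
np.allclose(trans_matrix.sum(axis=1), 1.0)   # True: each row is a conditional pmf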
'''
Adds a discrete outcome of x with probability p to an existing distribution,
holding constant the relative probabilities of other outcomes and overall mean.
Parameters
----------
distribution : [np.array]
Two element list containing a list of probabilities and a list of outcomes.
x : float
The new value to be added to the distribution.
p : float
The probability of the discrete outcome x occurring.
sort: bool
Whether or not to sort X before returning it
Returns
-------
X : np.array
Discrete points for discrete probability mass function.
pmf : np.array
Probability associated with each point in X.
Written by Matthew N. White
Latest update: 08 December 2015 by David Low
'''
X = np.append(x,distribution[1]*(1-p*x)/(1-p))
pmf = np.append(p,distribution[0]*(1-p))
if sort:
indices = np.argsort(X)
X = X[indices]
pmf = pmf[indices]
return([pmf,X])
|
def addDiscreteOutcomeConstantMean(distribution, x, p, sort = False)
|
Adds a discrete outcome of x with probability p to an existing distribution,
holding constant the relative probabilities of other outcomes and overall mean.
Parameters
----------
distribution : [np.array]
Two element list containing a list of probabilities and a list of outcomes.
x : float
The new value to be added to the distribution.
p : float
The probability of the discrete outcome x occurring.
sort: bool
Whether or not to sort X before returning it
Returns
-------
X : np.array
Discrete points for discrete probability mass function.
pmf : np.array
Probability associated with each point in X.
Written by Matthew N. White
Latest update: 08 December 2015 by David Low
| 6.250928 | 1.673028 | 3.736296 |
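The rescaling of existing outcomes by (1-p*x)/(1-p) is what holds the mean fixed; a sketch adding a small-probability zero outcome (e.g. a zero-income event):

import numpy as np

pmf0, X0 = np.array([0.5, 0.5]), np.array([0.9, 1.1])              # mean 1.0
pmf, X = addDiscreteOutcomeConstantMean([pmf0, X0], x=0.0, p=0.01)
np.dot(pmf, X)   # still 1.0: surviving outcomes are scaled up to offset x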
'''
Given n lists (or tuples) whose elements represent n independent, discrete
probability spaces (probabilities and values), construct a joint pmf over
all combinations of these independent points. Can take multivariate discrete
distributions as inputs.
Parameters
----------
distributions : [np.array]
Arbitrary number of distributions (pmfs). Each pmf is a list or tuple.
For each pmf, the first vector is probabilities and all subsequent vectors
are values. For each pmf, this should be true:
len(X_pmf[0]) == len(X_pmf[j]) for j in range(1,len(distributions))
Returns
-------
List of arrays, consisting of:
P_out: np.array
Probability associated with each point in X_out.
X_out: np.array (as many as in *distributions)
Discrete points for the joint discrete probability mass function.
Written by Nathan Palmer
Latest update: 5 July August 2017 by Matthew N White
'''
# Very quick and incomplete parameter check:
for dist in distributions:
assert len(dist[0]) == len(dist[-1]), "len(dist[0]) != len(dist[-1])"
# Get information on the distributions
dist_lengths = ()
dist_dims = ()
for dist in distributions:
dist_lengths += (len(dist[0]),)
dist_dims += (len(dist)-1,)
number_of_distributions = len(distributions)
# Initialize lists we will use
X_out = []
P_temp = []
# Now loop through the distributions, tiling and flattening as necessary.
for dd,dist in enumerate(distributions):
# The shape we want before we tile
dist_newshape = (1,) * dd + (len(dist[0]),) + \
(1,) * (number_of_distributions - dd)
# The tiling we want to do
dist_tiles = dist_lengths[:dd] + (1,) + dist_lengths[dd+1:]
# Now we are ready to tile.
# We don't use the np.meshgrid commands, because they do not
# easily support non-symmetric grids.
# First deal with probabilities
Pmesh = np.tile(dist[0].reshape(dist_newshape),dist_tiles) # Tiling
flatP = Pmesh.ravel() # Flatten the tiled arrays
P_temp += [flatP,] #Add the flattened arrays to the output lists
# Then loop through each value variable
for n in range(1,dist_dims[dd]+1):
Xmesh = np.tile(dist[n].reshape(dist_newshape),dist_tiles)
flatX = Xmesh.ravel()
X_out += [flatX,]
# We're done getting the flattened X_out arrays we wanted.
# However, we have a bunch of flattened P_temp arrays, and just want one
# probability array. So get the probability array, P_out, here.
P_out = np.prod(np.array(P_temp),axis=0)
assert np.isclose(np.sum(P_out),1),'Probabilities do not sum to 1!'
return [P_out,] + X_out
|
def combineIndepDstns(*distributions)
|
Given n lists (or tuples) whose elements represent n independent, discrete
probability spaces (probabilities and values), construct a joint pmf over
all combinations of these independent points. Can take multivariate discrete
distributions as inputs.
Parameters
----------
distributions : [np.array]
Arbitrary number of distributions (pmfs). Each pmf is a list or tuple.
For each pmf, the first vector is probabilities and all subsequent vectors
are values. For each pmf, this should be true:
len(X_pmf[0]) == len(X_pmf[j]) for j in range(1,len(distributions))
Returns
-------
List of arrays, consisting of:
P_out: np.array
Probability associated with each point in X_out.
X_out: np.array (as many as in *distributions)
Discrete points for the joint discrete probability mass function.
Written by Nathan Palmer
Latest update: 5 July August 2017 by Matthew N White
| 5.336797 | 2.672396 | 1.997008 |
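A sketch combining two independent two-point distributions; because the joint pmf is a product of marginals, expectations of products factor:

import numpy as np

dstn1 = [np.array([0.5, 0.5]), np.array([0.9, 1.1])]    # mean 1.0
dstn2 = [np.array([0.25, 0.75]), np.array([0.0, 1.0])]  # mean 0.75
P, X1, X2 = combineIndepDstns(dstn1, dstn2)              # four joint points
np.dot(P, X1*X2)   # 0.75 = E[X1]*E[X2], as independence requires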
'''
Make a multi-exponentially spaced grid.
Parameters
----------
ming : float
Minimum value of the grid
maxg : float
Maximum value of the grid
ng : int
The number of grid points
timestonest : int
the number of times to nest the exponentiation
Returns
-------
points : np.array
A multi-exponentially spaced grid
Original Matlab code can be found in Chris Carroll's
[Solution Methods for Microeconomic Dynamic Optimization Problems]
(http://www.econ2.jhu.edu/people/ccarroll/solvingmicrodsops/) toolkit.
Latest update: 01 May 2015
'''
if timestonest > 0:
Lming = ming
Lmaxg = maxg
for j in range(timestonest):
Lming = np.log(Lming + 1)
Lmaxg = np.log(Lmaxg + 1)
Lgrid = np.linspace(Lming,Lmaxg,ng)
grid = Lgrid
for j in range(timestonest):
grid = np.exp(grid) - 1
else:
Lming = np.log(ming)
Lmaxg = np.log(maxg)
Lstep = (Lmaxg - Lming)/(ng - 1)
Lgrid = np.arange(Lming,Lmaxg+0.000001,Lstep)
grid = np.exp(Lgrid)
return(grid)
|
def makeGridExpMult(ming, maxg, ng, timestonest=20)
|
Make a multi-exponentially spaced grid.
Parameters
----------
ming : float
Minimum value of the grid
maxg : float
Maximum value of the grid
ng : int
The number of grid points
timestonest : int
the number of times to nest the exponentiation
Returns
-------
points : np.array
A multi-exponentially spaced grid
Original Matlab code can be found in Chris Carroll's
[Solution Methods for Microeconomic Dynamic Optimization Problems]
(http://www.econ2.jhu.edu/people/ccarroll/solvingmicrodsops/) toolkit.
Latest update: 01 May 2015
| 3.522731 | 1.587502 | 2.21904 |
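A sketch of a typical call; nesting the exponentiation concentrates points near the bottom of the grid, where consumption functions have the most curvature:

grid = makeGridExpMult(ming=0.001, maxg=20.0, ng=8, timestonest=3)
# grid[1] - grid[0] is far smaller than grid[-1] - grid[-2]:
# points cluster near ming and spread out toward maxg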
'''
Generates a weighted average of simulated data. The Nth row of data is averaged
and then weighted by the Nth element of weights in an aggregate average.
Parameters
----------
data : numpy.array
An array of data with N rows of J floats
weights : numpy.array
A length N array of weights for the N rows of data.
Returns
-------
weighted_sum : float
The weighted sum of the data.
'''
data_avg = np.mean(data,axis=1)
weighted_sum = np.dot(data_avg,weights)
return weighted_sum
|
def calcWeightedAvg(data,weights)
|
Generates a weighted average of simulated data. The Nth row of data is averaged
and then weighted by the Nth element of weights in an aggregate average.
Parameters
----------
data : numpy.array
An array of data with N rows of J floats
weights : numpy.array
A length N array of weights for the N rows of data.
Returns
-------
weighted_sum : float
The weighted sum of the data.
| 4.941081 | 1.550931 | 3.18588 |
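A worked sketch: row means are computed first, then weighted:

import numpy as np

data = np.array([[1.0, 3.0], [2.0, 4.0]])   # row means: [2.0, 3.0]
weights = np.array([0.25, 0.75])
calcWeightedAvg(data, weights)               # 0.25*2.0 + 0.75*3.0 = 2.75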
'''
Calculates the requested percentiles of (weighted) data. Median by default.
Parameters
----------
data : numpy.array
A 1D array of float data.
weights : np.array
A weighting vector for the data.
percentiles : [float]
A list of percentiles to calculate for the data. Each element should
be in (0,1).
presorted : boolean
Indicator for whether data has already been sorted.
Returns
-------
pctl_out : numpy.array
The requested percentiles of the data.
'''
if weights is None: # Set equiprobable weights if none were passed
weights = np.ones(data.size)/float(data.size)
if presorted: # Use the data as given, since it is already sorted
data_sorted = data
weights_sorted = weights
else:
order = np.argsort(data)
data_sorted = data[order]
weights_sorted = weights[order]
cum_dist = np.cumsum(weights_sorted)/np.sum(weights_sorted) # cumulative probability distribution
# Calculate the requested percentiles by interpolating the data over the
# cumulative distribution, then evaluating at the percentile values
inv_CDF = interp1d(cum_dist,data_sorted,bounds_error=False,assume_sorted=True)
pctl_out = inv_CDF(percentiles)
return pctl_out
|
def getPercentiles(data,weights=None,percentiles=[0.5],presorted=False)
|
Calculates the requested percentiles of (weighted) data. Median by default.
Parameters
----------
data : numpy.array
A 1D array of float data.
weights : np.array
A weighting vector for the data.
percentiles : [float]
A list of percentiles to calculate for the data. Each element should
be in (0,1).
presorted : boolean
Indicator for whether data has already been sorted.
Returns
-------
pctl_out : numpy.array
The requested percentiles of the data.
| 3.140573 | 2.028256 | 1.548411 |
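A sketch of a weighted median; interpolating over the cumulative distribution pulls the answer toward heavily weighted observations:

import numpy as np

data = np.array([1.0, 2.0, 3.0, 4.0])
weights = np.array([0.1, 0.1, 0.1, 0.7])
getPercentiles(data, weights=weights, percentiles=[0.5])   # about 3.29, pulled toward 4.0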
'''
Calculates the Lorenz curve at the requested percentiles of (weighted) data.
Median by default.
Parameters
----------
data : numpy.array
A 1D array of float data.
weights : numpy.array
A weighting vector for the data.
percentiles : [float]
A list of percentiles to calculate for the data. Each element should
be in (0,1).
presorted : boolean
Indicator for whether data has already been sorted.
Returns
-------
lorenz_out : numpy.array
The requested Lorenz curve points of the data.
'''
if weights is None: # Set equiprobable weights if none were given
weights = np.ones(data.size)
if presorted: # Use the data as given, since it is already sorted
data_sorted = data
weights_sorted = weights
else:
order = np.argsort(data)
data_sorted = data[order]
weights_sorted = weights[order]
cum_dist = np.cumsum(weights_sorted)/np.sum(weights_sorted) # cumulative probability distribution
temp = data_sorted*weights_sorted
cum_data = np.cumsum(temp)/np.sum(temp) # cumulative ownership shares
# Calculate the requested Lorenz shares by interpolating the cumulative ownership
# shares over the cumulative distribution, then evaluating at requested points
lorenzFunc = interp1d(cum_dist,cum_data,bounds_error=False,assume_sorted=True)
lorenz_out = lorenzFunc(percentiles)
return lorenz_out
|
def getLorenzShares(data,weights=None,percentiles=[0.5],presorted=False)
|
Calculates the Lorenz curve at the requested percentiles of (weighted) data.
Median by default.
Parameters
----------
data : numpy.array
A 1D array of float data.
weights : numpy.array
A weighting vector for the data.
percentiles : [float]
A list of percentiles to calculate for the data. Each element should
be in (0,1).
presorted : boolean
Indicator for whether data has already been sorted.
Returns
-------
lorenz_out : numpy.array
The requested Lorenz curve points of the data.
| 3.378808 | 2.115694 | 1.597021 |
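A sketch with four equally weighted observations holding a total of 10 units; the bottom three quarters of the population own only 30 percent:

import numpy as np

data = np.array([1.0, 1.0, 1.0, 7.0])
getLorenzShares(data, percentiles=[0.25, 0.5, 0.75])   # [0.1, 0.2, 0.3]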
'''
Calculates the average of (weighted) data between cutoff percentiles of a
reference variable.
Parameters
----------
data : numpy.array
A 1D array of float data.
reference : numpy.array
A 1D array of float data of the same length as data.
cutoffs : [(float,float)]
A list of doubles with the lower and upper percentile bounds (should be
in [0,1]).
weights : numpy.array
A weighting vector for the data.
Returns
-------
slice_avg
The (weighted) average of data that falls within the cutoff percentiles
of reference.
'''
if weights is None: # Set equiprobable weights if none were given
weights = np.ones(data.size)
# Sort the data and generate a cumulative distribution
order = np.argsort(reference)
data_sorted = data[order]
weights_sorted = weights[order]
cum_dist = np.cumsum(weights_sorted)/np.sum(weights_sorted)
# For each set of cutoffs, calculate the average of data that falls within
# the cutoff percentiles of reference
slice_avg = []
for j in range(len(cutoffs)):
bot = np.searchsorted(cum_dist,cutoffs[j][0])
top = np.searchsorted(cum_dist,cutoffs[j][1])
slice_avg.append(np.sum(data_sorted[bot:top]*weights_sorted[bot:top])/
np.sum(weights_sorted[bot:top]))
return slice_avg
|
def calcSubpopAvg(data,reference,cutoffs,weights=None)
|
Calculates the average of (weighted) data between cutoff percentiles of a
reference variable.
Parameters
----------
data : numpy.array
A 1D array of float data.
reference : numpy.array
A 1D array of float data of the same length as data.
cutoffs : [(float,float)]
A list of doubles with the lower and upper percentile bounds (should be
in [0,1]).
weights : numpy.array
A weighting vector for the data.
Returns
-------
slice_avg
The (weighted) average of data that falls within the cutoff percentiles
of reference.
| 2.856473 | 1.751296 | 1.631062 |
'''
Performs a non-parametric Nadaraya-Watson 1D kernel regression on given data
with optionally specified range, number of points, and kernel bandwidth.
Parameters
----------
x : np.array
The independent variable in the kernel regression.
y : np.array
The dependent variable in the kernel regression.
bot : float
Minimum value of interest in the regression; defaults to min(x).
top : float
Maximum value of interest in the regression; defaults to max(x).
N : int
Number of points to compute.
h : float
The bandwidth of the (Epanechnikov) kernel. To-do: GENERALIZE.
Returns
-------
regression : scipy.interpolate.interp1d
A piecewise linear interpolant of the kernel regression: y = f(x).
'''
# Fix omitted inputs
if bot is None:
bot = np.min(x)
if top is None:
top = np.max(x)
if h is None:
h = 2.0*(top - bot)/float(N) # This is an arbitrary default
# Construct a local linear approximation
x_vec = np.linspace(bot,top,num=N)
y_vec = np.zeros_like(x_vec) + np.nan
for j in range(N):
x_here = x_vec[j]
weights = epanechnikovKernel(x,x_here,h)
y_vec[j] = np.dot(weights,y)/np.sum(weights)
regression = interp1d(x_vec,y_vec,bounds_error=False,assume_sorted=True)
return regression
|
def kernelRegression(x,y,bot=None,top=None,N=500,h=None)
|
Performs a non-parametric Nadaraya-Watson 1D kernel regression on given data
with optionally specified range, number of points, and kernel bandwidth.
Parameters
----------
x : np.array
The independent variable in the kernel regression.
y : np.array
The dependent variable in the kernel regression.
bot : float
Minimum value of interest in the regression; defaults to min(x).
top : float
Maximum value of interest in the regression; defaults to max(x).
N : int
Number of points to compute.
h : float
The bandwidth of the (Epanechnikov) kernel. To-do: GENERALIZE.
Returns
-------
regression : scipy.interpolate.interp1d
A piecewise linear interpolant of the kernel regression: y = f(x).
| 3.438936 | 1.774519 | 1.937954 |
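A sketch recovering a noisy sine curve; it relies on epanechnikovKernel (defined in the next entry) and scipy's interp1d, as in the function body:

import numpy as np

np.random.seed(0)
x = np.random.uniform(0.0, 10.0, 500)
y = np.sin(x) + 0.3*np.random.normal(size=500)
f = kernelRegression(x, y, N=100, h=0.5)
f(5.0)   # roughly sin(5.0), about -0.96, up to noise and smoothing bias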
'''
The Epanechnikov kernel.
Parameters
----------
x : np.array
Values at which to evaluate the kernel
ref_x : float
The reference point
h : float
Kernel bandwidth
Returns
-------
out : np.array
Kernel values at each value of x
'''
u = (x-ref_x)/h # Normalize distance by bandwidth
these = np.abs(u) <= 1.0 # Kernel = 0 outside [-1,1]
out = np.zeros_like(x) # Initialize kernel output
out[these] = 0.75*(1.0-u[these]**2.0) # Evaluate kernel
return out
|
def epanechnikovKernel(x,ref_x,h=1.0)
|
The Epanechnikov kernel.
Parameters
----------
x : np.array
Values at which to evaluate the kernel
ref_x : float
The reference point
h : float
Kernel bandwidth
Returns
-------
out : np.array
Kernel values at each value of x
| 3.816212 | 2.82501 | 1.350867 |
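The kernel is a downward parabola on [ref_x - h, ref_x + h] and zero elsewhere; a quick sketch:

import numpy as np

x = np.array([-1.5, -0.5, 0.0, 0.5, 1.5])
epanechnikovKernel(x, ref_x=0.0, h=1.0)
# [0.0, 0.5625, 0.75, 0.5625, 0.0]: peak 0.75 at ref_x, zero outside the bandwidth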
'''
Plots 1D function(s) over a given range.
Parameters
----------
functions : [function] or function
A single function, or a list of functions, to be plotted.
bottom : float
The lower limit of the domain to be plotted.
top : float
The upper limit of the domain to be plotted.
N : int
Number of points in the domain to evaluate.
legend_kwds: None, or dictionary
If not None, the keyword dictionary to pass to plt.legend
Returns
-------
none
'''
if type(functions)==list:
function_list = functions
else:
function_list = [functions]
for function in function_list:
x = np.linspace(bottom,top,N,endpoint=True)
y = function(x)
plt.plot(x,y)
plt.xlim([bottom, top])
if legend_kwds is not None:
plt.legend(**legend_kwds)
plt.show()
|
def plotFuncs(functions,bottom,top,N=1000,legend_kwds = None)
|
Plots 1D function(s) over a given range.
Parameters
----------
functions : [function] or function
A single function, or a list of functions, to be plotted.
bottom : float
The lower limit of the domain to be plotted.
top : float
The upper limit of the domain to be plotted.
N : int
Number of points in the domain to evaluate.
legend_kwds: None, or dictionary
If not None, the keyword dictionary to pass to plt.legend
Returns
-------
none
| 2.287618 | 1.445017 | 1.583108 |
'''
Plots the first derivative of 1D function(s) over a given range.
Parameters
----------
functions : [function] or function
A function or list of functions, the derivatives of which are to be plotted.
bottom : float
The lower limit of the domain to be plotted.
top : float
The upper limit of the domain to be plotted.
N : int
Number of points in the domain to evaluate.
legend_kwds: None, or dictionary
If not None, the keyword dictionary to pass to plt.legend
Returns
-------
none
'''
if type(functions)==list:
function_list = functions
else:
function_list = [functions]
step = (top-bottom)/N
for function in function_list:
x = np.arange(bottom,top,step)
y = function.derivative(x)
plt.plot(x,y)
plt.xlim([bottom, top])
if legend_kwds is not None:
plt.legend(**legend_kwds)
plt.show()
|
def plotFuncsDer(functions,bottom,top,N=1000,legend_kwds = None)
|
Plots the first derivative of 1D function(s) over a given range.
Parameters
----------
functions : [function] or function
A function or list of functions, the derivatives of which are to be plotted.
bottom : float
The lower limit of the domain to be plotted.
top : float
The upper limit of the domain to be plotted.
N : int
Number of points in the domain to evaluate.
legend_kwds: None, or dictionary
If not None, the keyword dictionary to pass to plt.legend
Returns
-------
none
| 2.517834 | 1.490149 | 1.689652 |
'''
Runs regressions for the main tables of the StickyC paper in Stata and produces a
LaTeX table with results for one "panel". Running in Stata allows production of
the KP-statistic, for which there is currently no command in statsmodels.api.
Parameters
----------
infile_name : str
Name of tab-delimited text file with simulation data. Assumed to be in
the results directory, and was almost surely generated by makeStickyEdataFile
unless we resort to fabricating simulated data. THAT'S A JOKE, FUTURE REFEREES.
interval_size : int
Number of periods in each regression sample (or interval).
meas_err : bool
Indicator for whether to add measurement error to DeltaLogC.
sticky : bool
Indicator for whether these results used sticky expectations.
all_specs : bool
Indicator for whether this panel should include all specifications or
just the OLS on lagged consumption growth.
stata_exe : str
Absolute location where the Stata executable can be found on the computer
running this code. Usually set at the top of StickyEparams.py.
Returns
-------
panel_text : str
String with one panel's worth of LaTeX input.
'''
dofile = "StickyETimeSeries.do"
infile_name_full = os.path.abspath(results_dir + infile_name + ".txt")
temp_name_full = os.path.abspath(results_dir + "temp.txt")
if meas_err:
meas_err_stata = 1
else:
meas_err_stata = 0
# Define the command to run the Stata do file
cmd = [stata_exe, "do", dofile, infile_name_full, temp_name_full, str(interval_size), str(meas_err_stata)]
# Run Stata do-file
stata_status = subprocess.call(cmd,shell=True)
if stata_status!=0:
raise ValueError('Stata code could not run. Check the stata_exe in StickyEparams.py')
stata_output = pd.read_csv(temp_name_full, sep=',',header=0)
# Make results table and return it
panel_text = makeResultsPanel(Coeffs=stata_output.CoeffsArray,
StdErrs=stata_output.StdErrArray,
Rsq=stata_output.RsqArray,
Pvals=stata_output.PvalArray,
OID=stata_output.OIDarray,
Counts=stata_output.ExtraInfo,
meas_err=meas_err,
sticky=sticky,
all_specs=all_specs)
return panel_text
|
def runStickyEregressionsInStata(infile_name,interval_size,meas_err,sticky,all_specs,stata_exe)
|
Runs regressions for the main tables of the StickyC paper in Stata and produces a
LaTeX table with results for one "panel". Running in Stata allows production of
the KP-statistic, for which there is currently no command in statsmodels.api.
Parameters
----------
infile_name : str
Name of tab-delimited text file with simulation data. Assumed to be in
the results directory, and was almost surely generated by makeStickyEdataFile
unless we resort to fabricating simulated data. THAT'S A JOKE, FUTURE REFEREES.
interval_size : int
Number of periods in each regression sample (or interval).
meas_err : bool
Indicator for whether to add measurement error to DeltaLogC.
sticky : bool
Indicator for whether these results used sticky expectations.
all_specs : bool
Indicator for whether this panel should include all specifications or
just the OLS on lagged consumption growth.
stata_exe : str
Absolute location where the Stata executable can be found on the computer
running this code. Usually set at the top of StickyEparams.py.
Returns
-------
panel_text : str
String with one panel's worth of LaTeX input.
| 5.826763 | 1.99077 | 2.926889 |
'''
Calculate expected value of being born in each Markov state using the realizations
of consumption for a history of many consumers. The histories should already be
trimmed of the "burn in" periods.
Parameters
----------
cLvlHist : np.array
TxN array of consumption level history for many agents across many periods.
Agents who die are replaced by newborns.
BirthBool : np.array
TxN boolean array indicating when agents are born, replacing one who died.
PlvlHist : np.array
T length vector of aggregate permanent productivity levels.
MrkvHist : np.array
T length vector of integers for the Markov index in each period.
DiscFac : float
Intertemporal discount factor.
CRRA : float
Coefficient of relative risk aversion.
Returns
-------
vAtBirth : np.array
J length vector of average lifetime value at birth by Markov state.
'''
J = np.max(MrkvHist) + 1 # Number of Markov states
T = MrkvHist.size # Length of simulation
I = cLvlHist.shape[1] # Number of agent indices in histories
u = lambda c : CRRAutility(c,gam=CRRA)
# Initialize an array to hold each agent's lifetime utility
BirthsByPeriod = np.sum(BirthBool,axis=1)
BirthsByState = np.zeros(J,dtype=int)
for j in range(J):
these = MrkvHist == j
BirthsByState[j] = np.sum(BirthsByPeriod[these])
N = np.max(BirthsByState) # Array must hold this many agents per row at least
vArray = np.zeros((J,N)) + np.nan
n = np.zeros(J,dtype=int)
# Loop through each agent index
DiscVec = DiscFac**np.arange(T)
for i in range(I):
birth_t = np.where(BirthBool[:,i])[0]
# Loop through each agent who lived and died in this index
for k in range(birth_t.size-1): # Last birth event has no death, so ignore
# Get lifespan of this agent and circumstances at birth
t0 = birth_t[k]
t1 = birth_t[k+1]
span = t1-t0
j = MrkvHist[t0]
# Calculate discounted flow of utility for this agent and store it
cVec = cLvlHist[t0:t1,i]/PlvlHist[t0]
uVec = u(cVec)
v = np.dot(DiscVec[:span],uVec)
vArray[j,n[j]] = v
n[j] += 1
# Calculate expected value at birth by state and return it
vAtBirth = np.nanmean(vArray,axis=1)
return vAtBirth
|
def calcValueAtBirth(cLvlHist,BirthBool,PlvlHist,MrkvHist,DiscFac,CRRA)
|
Calculate expected value of being born in each Markov state using the realizations
of consumption for a history of many consumers. The histories should already be
trimmed of the "burn in" periods.
Parameters
----------
cLvlHist : np.array
TxN array of consumption level history for many agents across many periods.
Agents who die are replaced by newborns.
BirthBool : np.array
TxN boolean array indicating when agents are born, replacing one who died.
PlvlHist : np.array
T length vector of aggregate permanent productivity levels.
MrkvHist : np.array
T length vector of integers for the Markov index in each period.
DiscFac : float
Intertemporal discount factor.
CRRA : float
Coefficient of relative risk aversion.
Returns
-------
vAtBirth : np.array
J length vector of average lifetime value at birth by Markov state.
| 4.886784 | 2.786276 | 1.753876 |
'''
Make a discrete preference shock structure for each period in the cycle
for this agent type, storing them as attributes of self for use in the
solution (and other methods).
Parameters
----------
none
Returns
-------
none
'''
time_orig = self.time_flow
self.timeFwd()
PrefShkDstn = [] # discrete distributions of preference shocks
for t in range(len(self.PrefShkStd)):
PrefShkStd = self.PrefShkStd[t]
PrefShkDstn.append(approxMeanOneLognormal(N=self.PrefShkCount,
sigma=PrefShkStd,tail_N=self.PrefShk_tail_N))
# Store the preference shocks in self (time-varying) and restore time flow
self.PrefShkDstn = PrefShkDstn
self.addToTimeVary('PrefShkDstn')
if not time_orig:
self.timeRev()
|
def updatePrefShockProcess(self)
|
Make a discrete preference shock structure for each period in the cycle
for this agent type, storing them as attributes of self for use in the
solution (and other methods).
Parameters
----------
none
Returns
-------
none
| 6.875285 | 3.851059 | 1.785297 |
'''
Gets permanent and transitory income shocks for this period as well as preference shocks.
Parameters
----------
None
Returns
-------
None
'''
IndShockConsumerType.getShocks(self) # Get permanent and transitory income shocks
PrefShkNow = np.zeros(self.AgentCount) # Initialize shock array
for t in range(self.T_cycle):
these = t == self.t_cycle
N = np.sum(these)
if N > 0:
PrefShkNow[these] = self.RNG.permutation(approxMeanOneLognormal(N,sigma=self.PrefShkStd[t])[1])
self.PrefShkNow = PrefShkNow
|
def getShocks(self)
|
Gets permanent and transitory income shocks for this period as well as preference shocks.
Parameters
----------
None
Returns
-------
None
| 4.48037 | 3.556561 | 1.259748 |
'''
Calculates consumption for each consumer of this type using the consumption functions.
Parameters
----------
None
Returns
-------
None
'''
cNrmNow = np.zeros(self.AgentCount) + np.nan
for t in range(self.T_cycle):
these = t == self.t_cycle
cNrmNow[these] = self.solution[t].cFunc(self.mNrmNow[these],self.PrefShkNow[these])
self.cNrmNow = cNrmNow
return None
|
def getControls(self)
|
Calculates consumption for each consumer of this type using the consumption functions.
Parameters
----------
None
Returns
-------
None
| 5.453211 | 3.175424 | 1.717317 |
'''
Find endogenous interpolation points for each asset point and each
discrete preference shock.
Parameters
----------
EndOfPrdvP : np.array
Array of end-of-period marginal values.
aNrmNow : np.array
Array of end-of-period asset values that yield the marginal values
in EndOfPrdvP.
Returns
-------
c_for_interpolation : np.array
Consumption points for interpolation.
m_for_interpolation : np.array
Corresponding market resource points for interpolation.
'''
c_base = self.uPinv(EndOfPrdvP)
PrefShkCount = self.PrefShkVals.size
PrefShk_temp = np.tile(np.reshape(self.PrefShkVals**(1.0/self.CRRA),(PrefShkCount,1)),
(1,c_base.size))
self.cNrmNow = np.tile(c_base,(PrefShkCount,1))*PrefShk_temp
self.mNrmNow = self.cNrmNow + np.tile(aNrmNow,(PrefShkCount,1))
# Add the bottom point to the c and m arrays
m_for_interpolation = np.concatenate((self.BoroCnstNat*np.ones((PrefShkCount,1)),
self.mNrmNow),axis=1)
c_for_interpolation = np.concatenate((np.zeros((PrefShkCount,1)),self.cNrmNow),axis=1)
return c_for_interpolation,m_for_interpolation
|
def getPointsForInterpolation(self,EndOfPrdvP,aNrmNow)
|
Find endogenous interpolation points for each asset point and each
discrete preference shock.
Parameters
----------
EndOfPrdvP : np.array
Array of end-of-period marginal values.
aNrmNow : np.array
Array of end-of-period asset values that yield the marginal values
in EndOfPrdvP.
Returns
-------
c_for_interpolation : np.array
Consumption points for interpolation.
m_for_interpolation : np.array
Corresponding market resource points for interpolation.
| 3.314473 | 2.106835 | 1.5732 |
'''
Make a basic solution object with a consumption function and marginal
value function (unconditional on the preference shock).
Parameters
----------
cNrm : np.array
Consumption points for interpolation.
mNrm : np.array
Corresponding market resource points for interpolation.
interpolator : function
A function that constructs and returns a consumption function.
Returns
-------
solution_now : ConsumerSolution
The solution to this period's consumption-saving problem, with a
consumption function, marginal value function, and minimum m.
'''
# Make the preference-shock specific consumption functions
PrefShkCount = self.PrefShkVals.size
cFunc_list = []
for j in range(PrefShkCount):
MPCmin_j = self.MPCminNow*self.PrefShkVals[j]**(1.0/self.CRRA)
cFunc_this_shock = LowerEnvelope(LinearInterp(mNrm[j,:],cNrm[j,:],
intercept_limit=self.hNrmNow*MPCmin_j,
slope_limit=MPCmin_j),self.cFuncNowCnst)
cFunc_list.append(cFunc_this_shock)
# Combine the list of consumption functions into a single interpolation
cFuncNow = LinearInterpOnInterp1D(cFunc_list,self.PrefShkVals)
# Make the ex ante marginal value function (before the preference shock)
m_grid = self.aXtraGrid + self.mNrmMinNow
vP_vec = np.zeros_like(m_grid)
for j in range(PrefShkCount): # numeric integration over the preference shock
vP_vec += self.uP(cFunc_list[j](m_grid))*self.PrefShkPrbs[j]*self.PrefShkVals[j]
vPnvrs_vec = self.uPinv(vP_vec)
vPfuncNow = MargValueFunc(LinearInterp(m_grid,vPnvrs_vec),self.CRRA)
# Store the results in a solution object and return it
solution_now = ConsumerSolution(cFunc=cFuncNow, vPfunc=vPfuncNow, mNrmMin=self.mNrmMinNow)
return solution_now
|
def usePointsForInterpolation(self,cNrm,mNrm,interpolator)
|
Make a basic solution object with a consumption function and marginal
value function (unconditional on the preference shock).
Parameters
----------
cNrm : np.array
Consumption points for interpolation.
mNrm : np.array
Corresponding market resource points for interpolation.
interpolator : function
A function that constructs and returns a consumption function.
Returns
-------
solution_now : ConsumerSolution
The solution to this period's consumption-saving problem, with a
consumption function, marginal value function, and minimum m.
| 4.2291 | 2.865478 | 1.475879 |
'''
Make the beginning-of-period value function (unconditional on the shock).
Parameters
----------
solution : ConsumerSolution
The solution to this single period problem, which must include the
consumption function.
Returns
-------
vFuncNow : ValueFunc
A representation of the value function for this period, defined over
normalized market resources m: v = vFuncNow(m).
'''
# Compute expected value and marginal value on a grid of market resources,
# accounting for all of the discrete preference shocks
PrefShkCount = self.PrefShkVals.size
mNrm_temp = self.mNrmMinNow + self.aXtraGrid
vNrmNow = np.zeros_like(mNrm_temp)
vPnow = np.zeros_like(mNrm_temp)
for j in range(PrefShkCount):
this_shock = self.PrefShkVals[j]
this_prob = self.PrefShkPrbs[j]
cNrmNow = solution.cFunc(mNrm_temp,this_shock*np.ones_like(mNrm_temp))
aNrmNow = mNrm_temp - cNrmNow
vNrmNow += this_prob*(this_shock*self.u(cNrmNow) + self.EndOfPrdvFunc(aNrmNow))
vPnow += this_prob*this_shock*self.uP(cNrmNow)
# Construct the beginning-of-period value function
vNvrs = self.uinv(vNrmNow) # value transformed through inverse utility
vNvrsP = vPnow*self.uinvP(vNrmNow)
mNrm_temp = np.insert(mNrm_temp,0,self.mNrmMinNow)
vNvrs = np.insert(vNvrs,0,0.0)
vNvrsP = np.insert(vNvrsP,0,self.MPCmaxEff**(-self.CRRA/(1.0-self.CRRA)))
MPCminNvrs = self.MPCminNow**(-self.CRRA/(1.0-self.CRRA))
vNvrsFuncNow = CubicInterp(mNrm_temp,vNvrs,vNvrsP,MPCminNvrs*self.hNrmNow,MPCminNvrs)
vFuncNow = ValueFunc(vNvrsFuncNow,self.CRRA)
return vFuncNow
|
def makevFunc(self,solution)
|
Make the beginning-of-period value function (unconditional on the shock).
Parameters
----------
solution : ConsumerSolution
The solution to this single period problem, which must include the
consumption function.
Returns
-------
vFuncNow : ValueFunc
A representation of the value function for this period, defined over
normalized market resources m: v = vFuncNow(m).
| 3.391237 | 2.49447 | 1.359502 |
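The pseudo-inverse construction above is worth spelling out: under CRRA utility the value function is strongly curved, but uinv(v) is close to linear, so interpolating vNvrs = uinv(v) (with slopes vNvrsP = v'*uinv'(v) from the chain rule) and recovering v = u(vNvrs) is far more accurate than interpolating v directly. A minimal sketch with assumed parameter values:

# Hedged sketch of the pseudo-inverse value function trick; names and
# parameter values are illustrative, not taken from HARK.
import numpy as np

rho = 3.0  # CRRA coefficient (assumed)
u = lambda c: c**(1.0-rho)/(1.0-rho)
uinv = lambda v: ((1.0-rho)*v)**(1.0/(1.0-rho))
uinvP = lambda v: ((1.0-rho)*v)**(rho/(1.0-rho))  # d uinv / dv

m = np.linspace(0.5, 10.0, 40)
v = u(0.6*m)                 # stand-in for the true value function
vP = 0.6*(0.6*m)**(-rho)     # its derivative by the chain rule
vNvrs = uinv(v)              # equals 0.6*m exactly: linear, easy to interpolate
vNvrsP = vP*uinvP(v)         # slope data for a cubic interpolant (constant 0.6)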
'''
Makes new consumers for the given indices. Slightly extends the base method by also setting
pLvlErrNow = 1.0 for new agents, indicating that they correctly perceive their productivity.
Parameters
----------
which_agents : np.array(Bool)
Boolean array of size self.AgentCount indicating which agents should be "born".
Returns
-------
None
'''
AggShockConsumerType.simBirth(self,which_agents)
if hasattr(self,'pLvlErrNow'):
self.pLvlErrNow[which_agents] = 1.0
else:
self.pLvlErrNow = np.ones(self.AgentCount)
|
def simBirth(self,which_agents)
|
Makes new consumers for the given indices. Slightly extends the base method by also setting
pLvlErrNow = 1.0 for new agents, indicating that they correctly perceive their productivity.
Parameters
----------
which_agents : np.array(Bool)
Boolean array of size self.AgentCount indicating which agents should be "born".
Returns
-------
None
| 6.676734 | 1.623109 | 4.113547 |
'''
Determine which agents update this period vs which don't. Fills in the
attributes update and dont as boolean arrays of size AgentCount.
Parameters
----------
None
Returns
-------
None
'''
how_many_update = int(round(self.UpdatePrb*self.AgentCount))
base_bool = np.zeros(self.AgentCount,dtype=bool)
base_bool[0:how_many_update] = True
self.update = self.RNG.permutation(base_bool)
self.dont = np.logical_not(self.update)
|
def getUpdaters(self)
|
Determine which agents update this period vs which don't. Fills in the
attributes update and dont as boolean arrays of size AgentCount.
Parameters
----------
None
Returns
-------
None
| 6.680993 | 2.507555 | 2.664345 |
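Note that the selection above is not i.i.d. per agent: exactly round(UpdatePrb*AgentCount) agents update each period, with which agents randomized by permuting a fixed boolean mask. A small self-contained sketch (seed and sizes assumed):

# Sketch of the updater-selection scheme: a fixed count of updaters, with
# identity randomized by permutation. Seed and sizes are illustrative.
import numpy as np

rng = np.random.RandomState(31382)  # assumed seed
UpdatePrb, AgentCount = 0.25, 10000
how_many = int(round(UpdatePrb*AgentCount))
base = np.zeros(AgentCount, dtype=bool)
base[:how_many] = True
update = rng.permutation(base)
dont = ~update
assert update.sum() == how_many  # no sampling noise in the updater count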
'''
Gets permanent and transitory shocks (combining idiosyncratic and aggregate shocks), but
only consumers who update their macroeconomic beliefs this period incorporate all
previously unnoticed aggregate permanent shocks. Agents correctly observe the level of all
real variables (market resources, consumption, assets, etc.), but misperceive the aggregate
productivity level.
Parameters
----------
None
Returns
-------
None
'''
# The strange syntax here is so that both StickyEconsumerType and StickyEmarkovConsumerType
# run the getShocks method of their first superclass: AggShockConsumerType and
# AggShockMarkovConsumerType respectively. This will be simplified in Python 3.
super(self.__class__,self).getShocks() # Get permanent and transitory combined shocks
newborns = self.t_age == 0
self.TranShkNow[newborns] = self.TranShkAggNow*self.wRteNow # Turn off idiosyncratic shocks for newborns
self.PermShkNow[newborns] = self.PermShkAggNow
self.getUpdaters() # Randomly draw which agents will update their beliefs
# Calculate innovation to the productivity level perception error
pLvlErrNew = self.getpLvlError()
self.pLvlErrNow *= pLvlErrNew # Perception error accumulation
# Calculate (mis)perceptions of the permanent shock
PermShkPcvd = self.PermShkNow/pLvlErrNew
PermShkPcvd[self.update] *= self.pLvlErrNow[self.update] # Updaters see the true permanent shock and all missed news
self.pLvlErrNow[self.update] = 1.0
self.PermShkNow = PermShkPcvd
|
def getShocks(self)
|
Gets permanent and transitory shocks (combining idiosyncratic and aggregate shocks), but
only consumers who update their macroeconomic beliefs this period incorporate all
previously unnoticed aggregate permanent shocks. Agents correctly observe the level of all
real variables (market resources, consumption, assets, etc.), but misperceive the aggregate
productivity level.
Parameters
----------
None
Returns
-------
None
| 9.402569 | 4.783357 | 1.965684 |
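The bookkeeping above can be summarized as: the perception error accumulates multiplicatively for non-updaters, while an updater's perceived shock absorbs the entire accumulated error at once, after which that agent's error resets to one. A toy sketch with made-up numbers:

# Illustrative sketch (assumed values) of the perception-error bookkeeping.
import numpy as np

pLvlErr = np.array([1.00, 1.04, 0.98])     # accumulated errors entering the period
PermShk = np.array([1.02, 1.02, 1.02])     # true permanent shocks
pLvlErrNew = np.array([1.01, 1.01, 1.01])  # this period's unnoticed piece
update = np.array([False, True, False])

pLvlErr = pLvlErr*pLvlErrNew        # error accumulation
PermShkPcvd = PermShk/pLvlErrNew    # non-updaters miss the new piece
PermShkPcvd[update] *= pLvlErr[update]  # updaters catch up on all missed news
pLvlErr[update] = 1.0               # updaters now perceive correctly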
'''
Gets simulated consumers' pLvl and mNrm for this period, but with the alteration that these
represent perceived rather than actual values. Also calculates mLvlTrue, the true level of
market resources that the individual has on hand.
Parameters
----------
None
Returns
-------
None
'''
# Update consumers' perception of their permanent income level
pLvlPrev = self.pLvlNow
self.pLvlNow = pLvlPrev*self.PermShkNow # Perceived permanent income level (only correct if macro state is observed this period)
self.PlvlAggNow *= self.PermShkAggNow # Updated aggregate permanent productivity level
self.pLvlTrue = self.pLvlNow*self.pLvlErrNow
# Calculate what the consumers perceive their normalized market resources to be
RfreeNow = self.getRfree()
bLvlNow = RfreeNow*self.aLvlNow # This is the true level
yLvlNow = self.pLvlTrue*self.TranShkNow # This is the true income level
mLvlTrueNow = bLvlNow + yLvlNow # This is the true market resource level
mNrmPcvdNow = mLvlTrueNow/self.pLvlNow # This is perceived normalized resources
self.mNrmNow = mNrmPcvdNow
self.mLvlTrueNow = mLvlTrueNow
self.yLvlNow = yLvlNow
|
def getStates(self)
|
Gets simulated consumers' pLvl and mNrm for this period, but with the alteration that these
represent perceived rather than actual values. Also calculates mLvlTrue, the true level of
market resources that the individual has on hand.
Parameters
----------
None
Returns
-------
None
| 6.897961 | 3.844877 | 1.794065 |
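The key asymmetry above is that market resources are observed as true levels, but the agent normalizes by its perceived permanent income rather than pLvlTrue = pLvlNow*pLvlErrNow. A minimal sketch with assumed values:

# Minimal sketch, with made-up numbers, of the perceived-vs-true state step.
import numpy as np

Rfree, TranShk = 1.03, 1.0
aLvl = np.array([2.0, 2.0])
pLvlNow = np.array([1.00, 1.00])  # perceived permanent income
pLvlErr = np.array([1.00, 1.05])  # agent 1 underperceives its productivity
pLvlTrue = pLvlNow*pLvlErr

mLvlTrue = Rfree*aLvl + pLvlTrue*TranShk  # true resources in hand
mNrmPcvd = mLvlTrue/pLvlNow               # what the agent thinks m is
# Agent 1's perceived m is too high: it divides true resources by a
# permanent income estimate that is 5 percent below the truth.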
'''
Slightly extends the base version of this method by recalculating aLvlNow to account for the
consumer's (potential) misperception about their productivity level.
Parameters
----------
None
Returns
-------
None
'''
AggShockConsumerType.getPostStates(self)
self.cLvlNow = self.cNrmNow*self.pLvlNow # True consumption level
self.aLvlNow = self.mLvlTrueNow - self.cLvlNow # True asset level
self.aNrmNow = self.aLvlNow/self.pLvlNow
|
def getPostStates(self)
|
Slightly extends the base version of this method by recalculating aLvlNow to account for the
consumer's (potential) misperception about their productivity level.
Parameters
----------
None
Returns
-------
None
| 8.597907 | 2.725035 | 3.155155 |
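The subtlety above is that consumption is chosen from perceived normalized resources while assets must evolve from true resources, so aLvlNow is recomputed from mLvlTrueNow rather than from the perceived budget. A toy illustration (values assumed):

# Sketch (assumed numbers) of the post-state correction.
import numpy as np

cNrm = np.array([0.8])      # consumption chosen per unit of perceived pLvl
pLvlNow = np.array([1.0])   # perceived permanent income
mLvlTrue = np.array([2.1])  # true market resources
cLvl = cNrm*pLvlNow         # true consumption level actually spent
aLvl = mLvlTrue - cLvl      # true assets carried into next period
aNrm = aLvl/pLvlNow         # assets normalized by perceived income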
'''
Determine which agents update this period vs which don't. Fills in the
attributes update and dont as boolean arrays of size AgentCount. This
version also updates perceptions of the Markov state.
Parameters
----------
None
Returns
-------
None
'''
StickyEconsumerType.getUpdaters(self)
# Only updaters change their perception of the Markov state
if hasattr(self,'MrkvNowPcvd'):
self.MrkvNowPcvd[self.update] = self.MrkvNow
else: # This only triggers in the first simulated period
self.MrkvNowPcvd = np.ones(self.AgentCount,dtype=int)*self.MrkvNow
|
def getUpdaters(self)
|
Determine which agents update this period vs which don't. Fills in the
attributes update and dont as boolean arrays of size AgentCount. This
version also updates perceptions of the Markov state.
Parameters
----------
None
Returns
-------
None
| 11.946505 | 3.808974 | 3.13641 |
'''
Calculates and returns the misperception of this period's shocks. Updaters
have no misperception this period, while those who don't update don't see
the value of the aggregate permanent shock and thus base their belief about
aggregate growth on the last Markov state that they actually observed,
which is stored in MrkvNowPcvd.
Parameters
----------
None
Returns
-------
pLvlErr : np.array
Array of size AgentCount with this period's (new) misperception.
'''
pLvlErr = np.ones(self.AgentCount)
pLvlErr[self.dont] = self.PermShkAggNow/self.PermGroFacAgg[self.MrkvNowPcvd[self.dont]]
return pLvlErr
|
def getpLvlError(self)
|
Calculates and returns the misperception of this period's shocks. Updaters
have no misperception this period, while those who don't update don't see
the value of the aggregate permanent shock and thus base their belief about
aggregate growth on the last Markov state that they actually observed,
which is stored in MrkvNowPcvd.
Parameters
----------
None
Returns
-------
pLvlErr : np.array
Array of size AgentCount with this period's (new) misperception.
| 14.081739 | 1.788323 | 7.874273 |
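Concretely, a non-updater attributes aggregate growth to the trend of the Markov state it last observed, so its new error is the true aggregate shock divided by that possibly stale growth factor. A hedged sketch with assumed values:

# Sketch of the misperception formula; all values are illustrative.
import numpy as np

PermGroFacAgg = np.array([0.99, 1.00, 1.01])  # growth factor by Markov state
PermShkAggNow = 1.012                         # realized aggregate permanent shock
MrkvNowPcvd = np.array([0, 2, 1, 2])          # last state each agent observed
dont = np.array([True, True, False, False])   # non-updaters

pLvlErr = np.ones(4)
pLvlErr[dont] = PermShkAggNow/PermGroFacAgg[MrkvNowPcvd[dont]]
# Updaters keep pLvlErr = 1; agent 0 expected 0.99 growth but the economy
# grew by 1.012, so its perception error this period is 1.012/0.99.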
'''
Makes new consumers for the given indices. Slightly extends the base method by also setting
pLvlTrue = 1.0 in the very first simulated period.
Parameters
----------
which_agents : np.array(Bool)
Boolean array of size self.AgentCount indicating which agents should be "born".
Returns
-------
None
'''
super(self.__class__,self).simBirth(which_agents)
if self.t_sim == 0: # Make sure that pLvlTrue and aLvlNow exist
self.pLvlTrue = np.ones(self.AgentCount)
self.aLvlNow = self.aNrmNow*self.pLvlTrue
|
def simBirth(self,which_agents)
|
Makes new consumers for the given indices. Slightly extends the base method by also setting
pLvlTrue = 1.0 in the very first simulated period.
Parameters
----------
which_agents : np.array(Bool)
Boolean array of size self.AgentCount indicating which agents should be "born".
Returns
-------
None
| 7.039732 | 2.159678 | 3.25962 |
'''
Calculates updated values of normalized market resources and permanent income level.
Makes both perceived and true values. The representative consumer will act on the
basis of his *perceived* normalized market resources.
Parameters
----------
None
Returns
-------
None
'''
# Calculate perceived and true productivity level
aLvlPrev = self.aLvlNow
self.pLvlTrue = self.pLvlTrue*self.PermShkNow
self.pLvlNow = self.getpLvlPcvd()
# Calculate true values of variables
self.kNrmTrue = aLvlPrev/self.pLvlTrue
self.yNrmTrue = self.kNrmTrue**self.CapShare*self.TranShkNow**(1.-self.CapShare) - self.kNrmTrue*self.DeprFac
self.Rfree = 1. + self.CapShare*self.kNrmTrue**(self.CapShare-1.)*self.TranShkNow**(1.-self.CapShare) - self.DeprFac
self.wRte = (1.-self.CapShare)*self.kNrmTrue**self.CapShare*self.TranShkNow**(-self.CapShare)
self.mNrmTrue = self.Rfree*self.kNrmTrue + self.wRte*self.TranShkNow
self.mLvlTrue = self.mNrmTrue*self.pLvlTrue
# Calculate perception of normalized market resources
self.mNrmNow = self.mLvlTrue/self.pLvlNow
|
def getStates(self)
|
Calculates updated values of normalized market resources and permanent income level.
Makes both perceived and true values. The representative consumer will act on the
basis of his *perceived* normalized market resources.
Parameters
----------
None
Returns
-------
None
| 4.720739 | 2.848484 | 1.657282 |
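The factor-price block above is standard Cobb-Douglas accounting: given k and the transitory shock theta, R*k + w*theta reproduces undepreciated capital plus output, m = (1-delta)*k + k^CapShare*theta^(1-CapShare). A self-contained check with illustrative parameters:

# Cobb-Douglas accounting check; parameter values are assumptions.
import numpy as np

CapShare, DeprFac = 0.36, 0.025  # assumed alpha and delta
k, theta = 12.0, 1.0             # capital per productivity unit, transitory shock

Rfree = 1. + CapShare*k**(CapShare-1.)*theta**(1.-CapShare) - DeprFac
wRte = (1.-CapShare)*k**CapShare*theta**(-CapShare)
mNrm = Rfree*k + wRte*theta
assert np.isclose(mNrm, (1.-DeprFac)*k + k**CapShare*theta**(1.-CapShare))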