code: string
signature: string
docstring: string
loss_without_docstring: float64
loss_with_docstring: float64
factor: float64
max_sigma = 2.0 * math.pow(np.nanmax(np.std(R, axis=0)), 2) return max_sigma
def _get_max_sigma(self, R)
Calculate maximum sigma of scanner RAS coordinates Parameters ---------- R : 2D array, with shape [n_voxel, n_dim] The coordinate matrix of fMRI data from one subject Returns ------- max_sigma : float The maximum sigma of scanner coordinates.
4.91178
5.244704
0.936522
max_sigma = self._get_max_sigma(R) final_lower = np.zeros(self.K * (self.n_dim + 1)) final_lower[0:self.K * self.n_dim] =\ np.tile(np.nanmin(R, axis=0), self.K) final_lower[self.K * self.n_dim:] =\ np.repeat(self.lower_ratio * max_sigma, self.K) final_upper = np.zeros(self.K * (self.n_dim + 1)) final_upper[0:self.K * self.n_dim] =\ np.tile(np.nanmax(R, axis=0), self.K) final_upper[self.K * self.n_dim:] =\ np.repeat(self.upper_ratio * max_sigma, self.K) bounds = (final_lower, final_upper) return bounds
def get_bounds(self, R)
Calculate lower and upper bounds for centers and widths Parameters ---------- R : 2D array, with shape [n_voxel, n_dim] The coordinate matrix of fMRI data from one subject Returns ------- bounds : 2-tuple of array_like The lower and upper bounds on the factors' centers and widths.
1.8959
1.842941
1.028736
centers = self.get_centers(estimate) widths = self.get_widths(estimate) recon = X.size other_err = 0 if template_centers is None else (2 * self.K) final_err = np.zeros(recon + other_err) F = self.get_factors(unique_R, inds, centers, widths) sigma = np.zeros((1,)) sigma[0] = data_sigma tfa_extension.recon(final_err[0:recon], X, F, W, sigma) if other_err > 0: # center error for k in np.arange(self.K): diff = (centers[k] - template_centers[k]) cov = from_tri_2_sym(template_centers_mean_cov[k], self.n_dim) final_err[recon + k] = math.sqrt( self.sample_scaling * diff.dot(np.linalg.solve(cov, diff.T))) # width error base = recon + self.K dist = template_widths_mean_var_reci *\ (widths - template_widths) ** 2 final_err[base:] = np.sqrt(self.sample_scaling * dist).ravel() return final_err
def _residual_multivariate( self, estimate, unique_R, inds, X, W, template_centers, template_centers_mean_cov, template_widths, template_widths_mean_var_reci, data_sigma)
Residual function for estimating centers and widths Parameters ---------- estimate : 1D array Current estimate of centers and widths unique_R : a list of array, Each element contains the unique values in one dimension of coordinate matrix R. inds : a list of array, Each element contains the indices to reconstruct one dimension of the original coordinate matrix from the unique array. X : 2D array, with shape [n_voxel, n_tr] fMRI data from one subject. W : 2D array, with shape [K, n_tr] The weight matrix. template_centers: 2D array, with shape [K, n_dim] The template prior on centers template_centers_mean_cov: 2D array, with shape [K, cov_size] The template prior on covariance of centers' mean template_widths: 1D array The template prior on widths template_widths_mean_var_reci: 1D array The reciprocal of the template prior on variance of widths' mean data_sigma: float The variance of X. Returns ------- final_err : 1D array The residual vector for estimating centers and widths.
4.738464
4.440427
1.067119
# least_squares only accept x in 1D format init_estimate = np.hstack( (init_centers.ravel(), init_widths.ravel())) # .copy() data_sigma = 1.0 / math.sqrt(2.0) * np.std(X) final_estimate = least_squares( self._residual_multivariate, init_estimate, args=( unique_R, inds, X, W, template_centers, template_widths, template_centers_mean_cov, template_widths_mean_var_reci, data_sigma), method=self.nlss_method, loss=self.nlss_loss, bounds=self.bounds, verbose=0, x_scale=self.x_scale, tr_solver=self.tr_solver) return final_estimate.x, final_estimate.cost
def _estimate_centers_widths( self, unique_R, inds, X, W, init_centers, init_widths, template_centers, template_widths, template_centers_mean_cov, template_widths_mean_var_reci)
Estimate centers and widths Parameters ---------- unique_R : a list of array, Each element contains the unique values in one dimension of coordinate matrix R. inds : a list of array, Each element contains the indices to reconstruct one dimension of the original coordinate matrix from the unique array. X : 2D array, with shape [n_voxel, n_tr] fMRI data from one subject. W : 2D array, with shape [K, n_tr] The weight matrix. init_centers : 2D array, with shape [K, n_dim] The initial values of centers. init_widths : 1D array The initial values of widths. template_centers: 2D array, with shape [K, n_dim] The template prior on centers template_widths: 1D array The template prior on widths template_centers_mean_cov: 2D array, with shape [K, cov_size] The template prior on covariance of centers' mean template_widths_mean_var_reci: 1D array The reciprocal of the template prior on variance of widths' mean Returns ------- final_estimate.x: 1D array The newly estimated centers and widths. final_estimate.cost: float The cost value.
3.152459
3.183125
0.990366
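The call above is a bounded nonlinear least-squares fit through scipy.optimize.least_squares. A minimal standalone sketch of the same call pattern follows; the one-center residual, the data, and the method/loss/bound values are invented for illustration and are not the actual TFA settings.

```python
import numpy as np
from scipy.optimize import least_squares

def residual(params, x, y):
    # Toy radial-basis model with a single center and width (illustrative only)
    center, width = params
    return y - np.exp(-(x - center) ** 2 / width)

rng = np.random.RandomState(0)
x = np.linspace(-3, 3, 50)
y = np.exp(-(x - 0.5) ** 2 / 1.2) + 0.05 * rng.randn(50)

init = np.array([0.0, 1.0])                              # initial center and width
bounds = (np.array([-3.0, 0.1]), np.array([3.0, 5.0]))   # (lower, upper) arrays
fit = least_squares(residual, init, args=(x, y),
                    method='trf', loss='soft_l1', bounds=bounds)
print(fit.x, fit.cost)                                   # estimates and final cost
```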
if template_prior is None: template_centers = None template_widths = None template_centers_mean_cov = None template_widths_mean_var_reci = None else: template_centers = self.get_centers(template_prior) template_widths = self.get_widths(template_prior) template_centers_mean_cov =\ self.get_centers_mean_cov(template_prior) template_widths_mean_var_reci = 1.0 /\ self.get_widths_mean_var(template_prior) inner_converged = False np.random.seed(self.seed) n = 0 while n < self.miter and not inner_converged: self._fit_tfa_inner( data, R, template_centers, template_widths, template_centers_mean_cov, template_widths_mean_var_reci) self._assign_posterior() inner_converged, _ = self._converged() if not inner_converged: self.local_prior = self.local_posterior_ else: logger.info("TFA converged at %d iteration." % (n)) n += 1 gc.collect() return self
def _fit_tfa(self, data, R, template_prior=None)
TFA main algorithm Parameters ---------- data: 2D array, in shape [n_voxel, n_tr] The fMRI data from one subject. R : 2D array, in shape [n_voxel, n_dim] The voxel coordinate matrix of fMRI data template_prior : 1D array, The template prior on centers and widths. Returns ------- TFA Returns the instance itself.
2.773533
2.846088
0.974507
unique_R = [] inds = [] for d in np.arange(self.n_dim): tmp_unique, tmp_inds = np.unique(R[:, d], return_inverse=True) unique_R.append(tmp_unique) inds.append(tmp_inds) return unique_R, inds
def get_unique_R(self, R)
Get unique values from coordinate matrix Parameters ---------- R : 2D array The coordinate matrix of a subject's fMRI data Returns ------- unique_R : a list of array, Each element contains the unique values in one dimension of coordinate matrix R. inds : a list of array, Each element contains the indices to reconstruct one dimension of the original coordinate matrix from the unique array.
2.268545
2.079082
1.091129
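The np.unique(..., return_inverse=True) decomposition used by get_unique_R can be checked in isolation; the coordinate values below are made up for the example.

```python
import numpy as np

# Toy coordinate matrix: 4 voxels in 2 dimensions (values invented)
R = np.array([[1.0, 5.0],
              [2.0, 5.0],
              [1.0, 7.0],
              [2.0, 7.0]])

unique_R, inds = [], []
for d in range(R.shape[1]):
    u, idx = np.unique(R[:, d], return_inverse=True)
    unique_R.append(u)    # sorted unique values in dimension d
    inds.append(idx)      # indices such that u[idx] reproduces R[:, d]

for d in range(R.shape[1]):
    assert np.array_equal(unique_R[d][inds[d]], R[:, d])
```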
nfeature = data.shape[0] nsample = data.shape[1] feature_indices =\ np.random.choice(nfeature, self.max_num_voxel, replace=False) sample_features = np.zeros(nfeature).astype(bool) sample_features[feature_indices] = True samples_indices =\ np.random.choice(nsample, self.max_num_tr, replace=False) curr_data = np.zeros((self.max_num_voxel, self.max_num_tr))\ .astype(float) curr_data = data[feature_indices] curr_data = curr_data[:, samples_indices].copy() curr_R = R[feature_indices].copy() centers = self.get_centers(self.local_prior) widths = self.get_widths(self.local_prior) unique_R, inds = self.get_unique_R(curr_R) F = self.get_factors(unique_R, inds, centers, widths) W = self.get_weights(curr_data, F) self.local_posterior_, self.total_cost = self._estimate_centers_widths( unique_R, inds, curr_data, W, centers, widths, template_centers, template_centers_mean_cov, template_widths, template_widths_mean_var_reci) return self
def _fit_tfa_inner( self, data, R, template_centers, template_widths, template_centers_mean_cov, template_widths_mean_var_reci)
Fit TFA model, the inner loop part Parameters ---------- data: 2D array, in shape [n_voxel, n_tr] The fMRI data of a subject R : 2D array, in shape [n_voxel, n_dim] The voxel coordinate matrix of fMRI data template_centers: 1D array The template prior on centers template_widths: 1D array The template prior on widths template_centers_mean_cov: 2D array, with shape [K, cov_size] The template prior on covariance of centers' mean template_widths_mean_var_reci: 1D array The reciprocal of template prior on variance of widths' mean Returns ------- TFA Returns the instance itself.
2.981094
2.880653
1.034867
if self.verbose: logger.info('Start to fit TFA ') if not isinstance(X, np.ndarray): raise TypeError("Input data should be an array") if X.ndim != 2: raise TypeError("Input data should be 2D array") if not isinstance(R, np.ndarray): raise TypeError("Input coordinate matrix should be an array") if R.ndim != 2: raise TypeError("Input coordinate matrix should be 2D array") if X.shape[0] != R.shape[0]: raise TypeError( "The number of voxels should be the same in X and R!") if self.weight_method != 'rr' and self.weight_method != 'ols': raise ValueError( "only 'rr' and 'ols' are accepted as weight_method!") # main algorithm self.n_dim = R.shape[1] self.cov_vec_size = np.sum(np.arange(self.n_dim) + 1) self.map_offset = self.get_map_offset() self.bounds = self.get_bounds(R) n_voxel = X.shape[0] n_tr = X.shape[1] self.sample_scaling = 0.5 * float( self.max_num_voxel * self.max_num_tr) / float(n_voxel * n_tr) if template_prior is None: self.init_prior(R) else: self.local_prior = template_prior[0: self.map_offset[2]] self._fit_tfa(X, R, template_prior) if template_prior is None: centers = self.get_centers(self.local_posterior_) widths = self.get_widths(self.local_posterior_) unique_R, inds = self.get_unique_R(R) self.F_ = self.get_factors(unique_R, inds, centers, widths) self.W_ = self.get_weights(X, self.F_) return self
def fit(self, X, R, template_prior=None)
Topographical Factor Analysis (TFA)[Manning2014] Parameters ---------- X : 2D array, in shape [n_voxel, n_sample] The fMRI data of one subject R : 2D array, in shape [n_voxel, n_dim] The voxel coordinate matrix of fMRI data template_prior : None or 1D array The template prior as an extra constraint None when fitting TFA alone
3.133541
2.927403
1.070416
recon = F.dot(W).ravel() err = mean_squared_error( data.ravel(), recon, multioutput='uniform_average') return math.sqrt(err)
def recon_err(data, F, W)
Calculate reconstruction error Parameters ---------- data : 2D array True data to recover. F : 2D array HTFA factor matrix. W : 2D array HTFA weight matrix. Returns ------- float Returns root mean squared reconstruction error.
4.492583
6.174216
0.727636
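As a sanity check of the metric, the same root-mean-squared reconstruction error can be computed directly; the matrix shapes below are arbitrary examples.

```python
import math
import numpy as np
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(0)
F = rng.randn(50, 3)                          # [n_voxel, K] factor matrix
W = rng.randn(3, 20)                          # [K, n_tr] weight matrix
data = F.dot(W) + 0.1 * rng.randn(50, 20)     # noisy "true" data

recon = F.dot(W).ravel()
rmse = math.sqrt(mean_squared_error(data.ravel(), recon))
assert np.isclose(rmse, np.sqrt(np.mean((data.ravel() - recon) ** 2)))
```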
W = htfa.get_weights(data, F) return recon_err(data, F, W)
def get_train_err(htfa, data, F)
Calculate training error Parameters ---------- htfa : HTFA An instance of HTFA, the factor analysis class in BrainIAK. data : 2D array Input data to HTFA. F : 2D array HTFA factor matrix. Returns ------- float Returns root mean squared error on training.
5.959761
8.76231
0.680159
clf = bcast_var[2] data = l[0][mask, :].T # print(l[0].shape, mask.shape, data.shape) skf = model_selection.StratifiedKFold(n_splits=bcast_var[1], shuffle=False) accuracy = np.mean(model_selection.cross_val_score(clf, data, y=bcast_var[0], cv=skf, n_jobs=1)) return accuracy
def _sfn(l, mask, myrad, bcast_var)
Score classifier on searchlight data using cross-validation. The classifier is in `bcast_var[2]`. The labels are in `bcast_var[0]`. The number of cross-validation folds is in `bcast_var[1]`.
3.48752
2.863548
1.217902
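The scoring pattern (stratified folds, mean cross-validated accuracy) can be reproduced on synthetic inputs; the classifier choice and data below are placeholders, not the actual searchlight inputs.

```python
import numpy as np
from sklearn import model_selection, svm

rng = np.random.RandomState(0)
data = rng.randn(24, 10)              # [n_samples, n_features], synthetic
labels = np.tile([0, 1], 12)          # balanced binary labels

clf = svm.SVC(kernel='linear')        # placeholder classifier
skf = model_selection.StratifiedKFold(n_splits=4, shuffle=False)
accuracy = np.mean(model_selection.cross_val_score(
    clf, data, y=labels, cv=skf, n_jobs=1))
print(accuracy)
```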
rank = MPI.COMM_WORLD.Get_rank() if rank == 0: logger.info( 'running activity-based voxel selection via Searchlight' ) self.sl.distribute([self.data], self.mask) self.sl.broadcast((self.labels, self.num_folds, clf)) if rank == 0: logger.info( 'data preparation done' ) # obtain a 3D array with accuracy numbers result_volume = self.sl.run_searchlight(_sfn) # get result tuple list from the volume result_list = result_volume[self.mask] results = [] if rank == 0: for idx, value in enumerate(result_list): if value is None: value = 0 results.append((idx, value)) # Sort the voxels results.sort(key=lambda tup: tup[1], reverse=True) logger.info( 'activity-based voxel selection via Searchlight is done' ) return result_volume, results
def run(self, clf)
run activity-based voxel selection Sort the voxels based on the cross-validation accuracy of their activity vectors within the searchlight Parameters ---------- clf: classification function the classifier to be used in cross validation Returns ------- result_volume: 3D array of accuracy numbers contains the voxelwise accuracy numbers obtained via Searchlight results: list of tuple (voxel_id, accuracy) the accuracy numbers of all voxels, in accuracy descending order the length of array equals the number of voxels
5.902512
4.261989
1.38492
# no shuffling in cv skf = model_selection.StratifiedKFold(n_splits=num_folds, shuffle=False) scores = model_selection.cross_val_score(clf, subject_data, y=labels, cv=skf, n_jobs=1) logger.debug( 'cross validation for voxel %d is done' % vid ) return (vid, scores.mean())
def _cross_validation_for_one_voxel(clf, vid, num_folds, subject_data, labels)
Score classifier on data using cross validation.
3.141211
3.234481
0.971164
rank = MPI.COMM_WORLD.Get_rank() if rank == self.master_rank: results = self._master() # Sort the voxels results.sort(key=lambda tup: tup[1], reverse=True) else: self._worker(clf) results = [] return results
def run(self, clf)
Run correlation-based voxel selection in master-worker model. Sort the voxels based on the cross-validation accuracy of their correlation vectors Parameters ---------- clf: classification function the classifier to be used in cross validation Returns ------- results: list of tuple (voxel_id, accuracy) the accuracy numbers of all voxels, in accuracy descending order the length of array equals the number of voxels
5.18786
4.250209
1.220613
logger.info( 'Master at rank %d starts to allocate tasks', MPI.COMM_WORLD.Get_rank() ) results = [] comm = MPI.COMM_WORLD size = comm.Get_size() sending_voxels = self.voxel_unit if self.voxel_unit < self.num_voxels \ else self.num_voxels current_task = (0, sending_voxels) status = MPI.Status() # using_size is used when the number of tasks # is smaller than the number of workers using_size = size for i in range(0, size): if i == self.master_rank: continue if current_task[1] == 0: using_size = i break logger.debug( 'master starts to send a task to worker %d' % i ) comm.send(current_task, dest=i, tag=self._WORKTAG) next_start = current_task[0] + current_task[1] sending_voxels = self.voxel_unit \ if self.voxel_unit < self.num_voxels - next_start \ else self.num_voxels - next_start current_task = (next_start, sending_voxels) while using_size == size: if current_task[1] == 0: break result = comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status) results += result comm.send(current_task, dest=status.Get_source(), tag=self._WORKTAG) next_start = current_task[0] + current_task[1] sending_voxels = self.voxel_unit \ if self.voxel_unit < self.num_voxels - next_start \ else self.num_voxels - next_start current_task = (next_start, sending_voxels) for i in range(0, using_size): if i == self.master_rank: continue result = comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG) results += result for i in range(0, size): if i == self.master_rank: continue comm.send(None, dest=i, tag=self._TERMINATETAG) return results
def _master(self)
Master node's operation. Assigning tasks to workers and collecting results from them Parameters ---------- None Returns ------- results: list of tuple (voxel_id, accuracy) the accuracy numbers of all voxels, in accuracy descending order the length of array equals the number of voxels
2.292363
2.202826
1.040646
logger.debug( 'worker %d is running, waiting for tasks from master at rank %d' % (MPI.COMM_WORLD.Get_rank(), self.master_rank) ) comm = MPI.COMM_WORLD status = MPI.Status() while 1: task = comm.recv(source=self.master_rank, tag=MPI.ANY_TAG, status=status) if status.Get_tag(): break comm.send(self._voxel_scoring(task, clf), dest=self.master_rank)
def _worker(self, clf)
Worker node's operation. Receiving tasks from the master to process and sending the result back Parameters ---------- clf: classification function the classifier to be used in cross validation Returns ------- None
3.664976
3.575085
1.025144
time1 = time.time() s = task[0] nEpochs = len(self.raw_data) logger.debug( 'start to compute the correlation: #epochs: %d, ' '#processed voxels: %d, #total voxels to compute against: %d' % (nEpochs, task[1], self.num_voxels2) ) corr = np.zeros((task[1], nEpochs, self.num_voxels2), np.float32, order='C') count = 0 for i in range(len(self.raw_data)): mat = self.raw_data[i] mat2 = self.raw_data2[i] if self.raw_data2 is not None else mat no_trans = 'N' trans = 'T' blas.compute_self_corr_for_voxel_sel(no_trans, trans, self.num_voxels2, task[1], mat.shape[0], 1.0, mat2, self.num_voxels2, s, mat, self.num_voxels, 0.0, corr, self.num_voxels2 * nEpochs, count) count += 1 time2 = time.time() logger.debug( 'correlation computation for %d voxels, takes %.2f s' % (task[1], (time2 - time1)) ) return corr
def _correlation_computation(self, task)
Use BLAS API to do correlation computation (matrix multiplication). Parameters ---------- task: tuple (start_voxel_id, num_processed_voxels) depicting the voxels assigned to compute Returns ------- corr: 3D array in shape [num_processed_voxels, num_epochs, num_voxels] the correlation values of all subjects in all epochs for the assigned values, in row-major corr[i, e, s + j] = corr[j, e, s + i]
3.581337
3.20664
1.11685
time1 = time.time() (sv, e, av) = corr.shape for i in range(sv): start = 0 while start < e: cur_val = corr[i, start: start + self.epochs_per_subj, :] cur_val = .5 * np.log((cur_val + 1) / (1 - cur_val)) corr[i, start: start + self.epochs_per_subj, :] = \ zscore(cur_val, axis=0, ddof=0) start += self.epochs_per_subj # if zscore fails (standard deviation is zero), # set all values to be zero corr = np.nan_to_num(corr) time2 = time.time() logger.debug( 'within-subject normalization for %d voxels ' 'using numpy zscore function, takes %.2f s' % (sv, (time2 - time1)) ) return corr
def _correlation_normalization(self, corr)
Do within-subject normalization. This method uses scipy.stats.zscore to normalize the data, but is much slower than its C++ counterpart. The z-scoring is done in place. Parameters ---------- corr: 3D array in shape [num_processed_voxels, num_epochs, num_voxels] the correlation values of all subjects in all epochs for the assigned values, in row-major Returns ------- corr: 3D array in shape [num_processed_voxels, num_epochs, num_voxels] the normalized correlation values of all subjects in all epochs for the assigned values, in row-major
4.064946
3.53786
1.148984
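The normalization above is a Fisher z-transform followed by z-scoring within each subject's block of epochs; a minimal sketch on a toy correlation array (shapes and values are invented):

```python
import numpy as np
from scipy.stats import zscore

epochs_per_subj = 4
rng = np.random.RandomState(0)
# [n_selected_voxels, n_epochs, n_voxels]: two subjects x 4 epochs each
corr = rng.uniform(-0.9, 0.9, (2, 8, 5)).astype(np.float32)

for i in range(corr.shape[0]):
    for start in range(0, corr.shape[1], epochs_per_subj):
        block = corr[i, start:start + epochs_per_subj, :]
        block = 0.5 * np.log((1 + block) / (1 - block))   # Fisher z-transform
        corr[i, start:start + epochs_per_subj, :] = zscore(block, axis=0, ddof=0)

corr = np.nan_to_num(corr)    # zero out columns where the std was zero
```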
time1 = time.time() (num_processed_voxels, num_epochs, _) = corr.shape if isinstance(clf, sklearn.svm.SVC) and clf.kernel == 'precomputed': # kernel matrices should be computed kernel_matrices = np.zeros((num_processed_voxels, num_epochs, num_epochs), np.float32, order='C') for i in range(num_processed_voxels): blas.compute_kernel_matrix('L', 'T', num_epochs, self.num_voxels2, 1.0, corr, i, self.num_voxels2, 0.0, kernel_matrices[i, :, :], num_epochs) # shrink the values for getting more stable alpha values # in SVM training iteration num_digits = len(str(int(kernel_matrices[i, 0, 0]))) if num_digits > 2: proportion = 10**(2-num_digits) kernel_matrices[i, :, :] *= proportion data = kernel_matrices else: data = corr time2 = time.time() logger.debug( 'cross validation data preparation takes %.2f s' % (time2 - time1) ) return data
def _prepare_for_cross_validation(self, corr, clf)
Prepare data for voxelwise cross validation. If the classifier is sklearn.svm.SVC with precomputed kernel, the kernel matrix of each voxel is computed, otherwise do nothing. Parameters ---------- corr: 3D array in shape [num_processed_voxels, num_epochs, num_voxels] the normalized correlation values of all subjects in all epochs for the assigned values, in row-major clf: classification function the classifier to be used in cross validation Returns ------- data: 3D numpy array If using sklearn.svm.SVC with precomputed kernel, it is in shape [num_processed_voxels, num_epochs, num_epochs]; otherwise it is the input argument corr, in shape [num_processed_voxels, num_epochs, num_voxels]
4.42861
3.644763
1.215061
time1 = time.time() if isinstance(clf, sklearn.svm.SVC) and clf.kernel == 'precomputed'\ and self.use_multiprocessing: inlist = [(clf, i + task[0], self.num_folds, data[i, :, :], self.labels) for i in range(task[1])] with multiprocessing.Pool(self.process_num) as pool: results = list(pool.starmap(_cross_validation_for_one_voxel, inlist)) else: results = [] for i in range(task[1]): result = _cross_validation_for_one_voxel(clf, i + task[0], self.num_folds, data[i, :, :], self.labels) results.append(result) time2 = time.time() logger.debug( 'cross validation for %d voxels, takes %.2f s' % (task[1], (time2 - time1)) ) return results
def _do_cross_validation(self, clf, data, task)
Run voxelwise cross validation based on correlation vectors. Parameters ---------- clf: classification function the classifier to be used in cross validation data: 3D numpy array If using sklearn.svm.SVC with precomputed kernel, it is in shape [num_processed_voxels, num_epochs, num_epochs]; otherwise it is the input argument corr, in shape [num_processed_voxels, num_epochs, num_voxels] task: tuple (start_voxel_id, num_processed_voxels) depicting the voxels assigned to compute Returns ------- results: list of tuple (voxel_id, accuracy) the accuracy numbers of all voxels, in accuracy descending order the length of array equals the number of assigned voxels
2.844017
2.541203
1.119162
time1 = time.time() # correlation computation corr = self._correlation_computation(task) # normalization # corr = self._correlation_normalization(corr) time3 = time.time() fcma_extension.normalization(corr, self.epochs_per_subj) time4 = time.time() logger.debug( 'within-subject normalization for %d voxels ' 'using C++, takes %.2f s' % (task[1], (time4 - time3)) ) # cross validation data = self._prepare_for_cross_validation(corr, clf) if isinstance(clf, sklearn.svm.SVC) and clf.kernel == 'precomputed': # to save memory so that the process can be forked del corr results = self._do_cross_validation(clf, data, task) time2 = time.time() logger.info( 'in rank %d, task %d takes %.2f s' % (MPI.COMM_WORLD.Get_rank(), (int(task[0] / self.voxel_unit)), (time2 - time1)) ) return results
def _voxel_scoring(self, task, clf)
The voxel selection process done in the worker node. Take the task in, do analysis on voxels specified by the task (voxel id, num_voxels) It is a three-stage pipeline consisting of: 1. correlation computation 2. within-subject normalization 3. voxelwise cross validation Parameters ---------- task: tuple (start_voxel_id, num_processed_voxels), depicting the voxels assigned to compute clf: classification function the classifier to be used in cross validation Returns ------- results: list of tuple (voxel_id, accuracy) the accuracy numbers of all voxels, in accuracy descending order the length of array equals the number of assigned voxels
5.239889
4.504151
1.163346
logger.info('Starting SS-SRM') # Check that the alpha value is in range (0.0,1.0) if 0.0 >= self.alpha or self.alpha >= 1.0: raise ValueError("Alpha parameter should be in range (0.0, 1.0)") # Check that the regularizer value is positive if 0.0 >= self.gamma: raise ValueError("Gamma parameter should be positive.") # Check the number of subjects if len(X) <= 1 or len(y) <= 1 or len(Z) <= 1: raise ValueError("There are not enough subjects in the input " "data to train the model.") if not (len(X) == len(y)) or not (len(X) == len(Z)): raise ValueError("Different number of subjects in data.") # Check for input data sizes if X[0].shape[1] < self.features: raise ValueError( "There are not enough samples to train the model with " "{0:d} features.".format(self.features)) # Check if all subjects have same number of TRs for alignment # and if alignment and classification data have the same number of # voxels per subject. Also check that there labels for all the classif. # sample number_trs = X[0].shape[1] number_subjects = len(X) for subject in range(number_subjects): assert_all_finite(X[subject]) assert_all_finite(Z[subject]) if X[subject].shape[1] != number_trs: raise ValueError("Different number of alignment samples " "between subjects.") if X[subject].shape[0] != Z[subject].shape[0]: raise ValueError("Different number of voxels between alignment" " and classification data (subject {0:d})" ".".format(subject)) if Z[subject].shape[1] != y[subject].size: raise ValueError("Different number of samples and labels in " "subject {0:d}.".format(subject)) # Map the classes to [0..C-1] new_y = self._init_classes(y) # Run SS-SRM self.w_, self.s_, self.theta_, self.bias_ = self._sssrm(X, Z, new_y) return self
def fit(self, X, y, Z)
Compute the Semi-Supervised Shared Response Model Parameters ---------- X : list of 2D arrays, element i has shape=[voxels_i, n_align] Each element in the list contains the fMRI data for alignment of one subject. There are n_align samples for each subject. y : list of arrays of int, element i has shape=[samples_i] Each element in the list contains the labels for the data samples in Z. Z : list of 2D arrays, element i has shape=[voxels_i, samples_i] Each element in the list contains the fMRI data of one subject for training the MLR classifier.
3.446848
3.195484
1.078662
self.classes_ = unique_labels(utils.concatenate_not_none(y)) new_y = [None] * len(y) for s in range(len(y)): new_y[s] = np.digitize(y[s], self.classes_) - 1 return new_y
def _init_classes(self, y)
Map all possible classes to the range [0,..,C-1] Parameters ---------- y : list of arrays of int, each element has shape=[samples_i,] Labels of the samples for each subject Returns ------- new_y : list of arrays of int, each element has shape=[samples_i,] Mapped labels of the samples for each subject Note ---- The mapping of the classes is saved in the attribute classes_.
3.81769
3.318319
1.150489
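The remapping relies on np.digitize against the sorted unique classes; a tiny standalone example with made-up labels:

```python
import numpy as np

y = [np.array([10, 30, 10]), np.array([20, 30])]   # labels from two subjects
classes = np.unique(np.concatenate(y))              # array([10, 20, 30])

new_y = [np.digitize(labels, classes) - 1 for labels in y]
print(new_y)    # [array([0, 2, 0]), array([1, 2])]
```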
# Check if the model exist if hasattr(self, 'w_') is False: raise NotFittedError("The model fit has not been run yet.") # Check the number of subjects if len(X) != len(self.w_): raise ValueError("The number of subjects does not match the one" " in the model.") X_shared = self.transform(X) p = [None] * len(X_shared) for subject in range(len(X_shared)): sumexp, _, exponents = utils.sumexp_stable( self.theta_.T.dot(X_shared[subject]) + self.bias_) p[subject] = self.classes_[ (exponents / sumexp[np.newaxis, :]).argmax(axis=0)] return p
def predict(self, X)
Classify the output for given data Parameters ---------- X : list of 2D arrays, element i has shape=[voxels_i, samples_i] Each element in the list contains the fMRI data of one subject The number of voxels for each subject must match the number used when the model was trained. Returns ------- p: list of arrays, element i has shape=[samples_i] Predictions for each data sample.
3.997304
3.852075
1.037702
classes = self.classes_.size # Initialization: self.random_state_ = np.random.RandomState(self.rand_seed) random_states = [ np.random.RandomState(self.random_state_.randint(2**32)) for i in range(len(data_align))] # Set Wi's to a random orthogonal voxels by TRs w, _ = srm._init_w_transforms(data_align, self.features, random_states) # Initialize the shared response S s = SSSRM._compute_shared_response(data_align, w) # Initialize theta and bias theta, bias = self._update_classifier(data_sup, labels, w, classes) # calculate and print the objective function if logger.isEnabledFor(logging.INFO): objective = self._objective_function(data_align, data_sup, labels, w, s, theta, bias) logger.info('Objective function %f' % objective) # Main loop: for iteration in range(self.n_iter): logger.info('Iteration %d' % (iteration + 1)) # Update the mappings Wi w = self._update_w(data_align, data_sup, labels, w, s, theta, bias) # Output the objective function if logger.isEnabledFor(logging.INFO): objective = self._objective_function(data_align, data_sup, labels, w, s, theta, bias) logger.info('Objective function after updating Wi %f' % objective) # Update the shared response S s = SSSRM._compute_shared_response(data_align, w) # Output the objective function if logger.isEnabledFor(logging.INFO): objective = self._objective_function(data_align, data_sup, labels, w, s, theta, bias) logger.info('Objective function after updating S %f' % objective) # Update the MLR classifier, theta and bias theta, bias = self._update_classifier(data_sup, labels, w, classes) # Output the objective function if logger.isEnabledFor(logging.INFO): objective = self._objective_function(data_align, data_sup, labels, w, s, theta, bias) logger.info('Objective function after updating MLR %f' % objective) return w, s, theta, bias
def _sssrm(self, data_align, data_sup, labels)
Block-Coordinate Descent algorithm for fitting SS-SRM. Parameters ---------- data_align : list of 2D arrays, element i has shape=[voxels_i, n_align] Each element in the list contains the fMRI data for alignment of one subject. There are n_align samples for each subject. data_sup : list of 2D arrays, element i has shape=[voxels_i, samples_i] Each element in the list contains the fMRI data of one subject for the classification task. labels : list of arrays of int, element i has shape=[samples_i] Each element in the list contains the labels for the data samples in data_sup. Returns ------- w : list of array, element i has shape=[voxels_i, features] The orthogonal transforms (mappings) :math:`W_i` for each subject. s : array, shape=[features, samples] The shared response. theta : array, shape=[features, classes] The MLR parameter for the class planes. bias : array, shape=[classes,] The MLR parameter for the class biases.
2.677859
2.503807
1.069515
# Stack the data and labels for training the classifier data_stacked, labels_stacked, weights = \ SSSRM._stack_list(data, labels, w) features = w[0].shape[1] total_samples = weights.size data_th = S.shared(data_stacked.astype(theano.config.floatX)) val_ = S.shared(labels_stacked) total_samples_S = S.shared(total_samples) theta_th = T.matrix(name='theta', dtype=theano.config.floatX) bias_th = T.col(name='bias', dtype=theano.config.floatX) constf2 = S.shared(self.alpha / self.gamma, allow_downcast=True) weights_th = S.shared(weights) log_p_y_given_x = \ T.log(T.nnet.softmax((theta_th.T.dot(data_th.T)).T + bias_th.T)) f = -constf2 * T.sum((log_p_y_given_x[T.arange(total_samples_S), val_]) / weights_th) + 0.5 * T.sum(theta_th ** 2) manifold = Product((Euclidean(features, classes), Euclidean(classes, 1))) problem = Problem(manifold=manifold, cost=f, arg=[theta_th, bias_th], verbosity=0) solver = ConjugateGradient(mingradnorm=1e-6) solution = solver.solve(problem) theta = solution[0] bias = solution[1] del constf2 del theta_th del bias_th del data_th del val_ del solver del solution return theta, bias
def _update_classifier(self, data, labels, w, classes)
Update the classifier parameters theta and bias Parameters ---------- data : list of 2D arrays, element i has shape=[voxels_i, samples_i] Each element in the list contains the fMRI data of one subject for the classification task. labels : list of arrays of int, element i has shape=[samples_i] Each element in the list contains the labels for the data samples in data_sup. w : list of 2D array, element i has shape=[voxels_i, features] The orthogonal transforms (mappings) :math:`W_i` for each subject. classes : int The number of classes in the classifier. Returns ------- theta : array, shape=[features, classes] The MLR parameter for the class planes. bias : array shape=[classes,] The MLR parameter for class biases.
3.98794
3.847744
1.036436
s = np.zeros((w[0].shape[1], data[0].shape[1])) for m in range(len(w)): s = s + w[m].T.dot(data[m]) s /= len(w) return s
def _compute_shared_response(data, w)
Compute the shared response S Parameters ---------- data : list of 2D arrays, element i has shape=[voxels_i, samples] Each element in the list contains the fMRI data of one subject. w : list of 2D arrays, element i has shape=[voxels_i, features] The orthogonal transforms (mappings) :math:`W_i` for each subject. Returns ------- s : array, shape=[features, samples] The shared response for the subjects data with the mappings in w.
2.863352
2.609467
1.097294
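A self-contained numeric sketch of the same shared-response computation, averaging W_i^T X_i across subjects (random matrices stand in for real data):

```python
import numpy as np

rng = np.random.RandomState(0)
features, samples = 3, 10
voxels = [20, 25]                                   # two subjects, different voxel counts
data = [rng.randn(v, samples) for v in voxels]
w = [np.linalg.qr(rng.randn(v, features))[0] for v in voxels]   # orthogonal mappings

s = np.zeros((features, samples))
for m in range(len(w)):
    s += w[m].T.dot(data[m])        # project each subject into the shared space
s /= len(w)                         # average across subjects
print(s.shape)                      # (3, 10)
```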
subjects = len(data_align) # Compute the SRM loss f_val = 0.0 for subject in range(subjects): samples = data_align[subject].shape[1] f_val += (1 - self.alpha) * (0.5 / samples) \ * np.linalg.norm(data_align[subject] - w[subject].dot(s), 'fro')**2 # Compute the MLR loss f_val += self._loss_lr(data_sup, labels, w, theta, bias) return f_val
def _objective_function(self, data_align, data_sup, labels, w, s, theta, bias)
Compute the objective function of the Semi-Supervised SRM See :eq:`sssrm-eq`. Parameters ---------- data_align : list of 2D arrays, element i has shape=[voxels_i, n_align] Each element in the list contains the fMRI data for alignment of one subject. There are n_align samples for each subject. data_sup : list of 2D arrays, element i has shape=[voxels_i, samples_i] Each element in the list contains the fMRI data of one subject for the classification task. labels : list of arrays of int, element i has shape=[samples_i] Each element in the list contains the labels for the data samples in data_sup. w : list of array, element i has shape=[voxels_i, features] The orthogonal transforms (mappings) :math:`W_i` for each subject. s : array, shape=[features, samples] The shared response. theta : array, shape=[classes, features] The MLR class plane parameters. bias : array, shape=[classes] The MLR class biases. Returns ------- f_val : float The SS-SRM objective function evaluated based on the parameters to this function.
4.038027
3.565099
1.132655
# Compute the SRM loss f_val = 0.0 samples = data_align.shape[1] f_val += (1 - self.alpha) * (0.5 / samples) \ * np.linalg.norm(data_align - w.dot(s), 'fro')**2 # Compute the MLR loss f_val += self._loss_lr_subject(data_sup, labels, w, theta, bias) return f_val
def _objective_function_subject(self, data_align, data_sup, labels, w, s, theta, bias)
Compute the objective function for one subject. .. math:: (1-C)*Loss_{SRM}_i(W_i,S;X_i) .. math:: + C/\\gamma * Loss_{MLR_i}(\\theta, bias; {(W_i^T*Z_i, y_i}) .. math:: + R(\\theta) Parameters ---------- data_align : 2D array, shape=[voxels_i, samples_align] Contains the fMRI data for alignment of subject i. data_sup : 2D array, shape=[voxels_i, samples_i] Contains the fMRI data of one subject for the classification task. labels : array of int, shape=[samples_i] The labels for the data samples in data_sup. w : array, shape=[voxels_i, features] The orthogonal transform (mapping) :math:`W_i` for subject i. s : array, shape=[features, samples] The shared response. theta : array, shape=[classes, features] The MLR class plane parameters. bias : array, shape=[classes] The MLR class biases. Returns ------- f_val : float The SS-SRM objective function for subject i evaluated on the parameters to this function.
4.451812
3.707336
1.200811
if data is None: return 0.0 samples = data.shape[1] thetaT_wi_zi_plus_bias = theta.T.dot(w.T.dot(data)) + bias sum_exp, max_value, _ = utils.sumexp_stable(thetaT_wi_zi_plus_bias) sum_exp_values = np.log(sum_exp) + max_value aux = 0.0 for sample in range(samples): label = labels[sample] aux += thetaT_wi_zi_plus_bias[label, sample] return self.alpha / samples / self.gamma * (sum_exp_values.sum() - aux)
def _loss_lr_subject(self, data, labels, w, theta, bias)
Compute the Loss MLR for a single subject (without regularization) Parameters ---------- data : array, shape=[voxels, samples] The fMRI data of subject i for the classification task. labels : array of int, shape=[samples] The labels for the data samples in data. w : array, shape=[voxels, features] The orthogonal transform (mapping) :math:`W_i` for subject i. theta : array, shape=[classes, features] The MLR class plane parameters. bias : array, shape=[classes] The MLR class biases. Returns ------- loss : float The loss MLR for the subject
4.241104
4.657131
0.910669
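The loss above hinges on a numerically stable log-sum-exp; a standalone sketch of that idea using scipy.special.logsumexp in place of the brainiak utility, on made-up scores and labels:

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.RandomState(0)
scores = rng.randn(3, 5)                  # [classes, samples] linear scores
labels = rng.randint(0, 3, 5)             # one class label per sample

log_normalizer = logsumexp(scores, axis=0)            # stable log sum_c exp(score_c)
correct_class_scores = scores[labels, np.arange(5)]
nll = np.sum(log_normalizer - correct_class_scores)   # multinomial logistic loss
print(nll)
```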
subjects = len(data) loss = 0.0 for subject in range(subjects): if labels[subject] is not None: loss += self._loss_lr_subject(data[subject], labels[subject], w[subject], theta, bias) return loss + 0.5 * np.linalg.norm(theta, 'fro')**2
def _loss_lr(self, data, labels, w, theta, bias)
Compute the Loss MLR (with the regularization) Parameters ---------- data : list of 2D arrays, element i has shape=[voxels_i, samples_i] Each element in the list contains the fMRI data of one subject for the classification task. labels : list of arrays of int, element i has shape=[samples_i] Each element in the list contains the labels for the samples in data. w : list of array, element i has shape=[voxels_i, features] The orthogonal transforms (mappings) :math:`W_i` for each subject. theta : array, shape=[classes, features] The MLR class plane parameters. bias : array, shape=[classes] The MLR class biases. Returns ------- loss : float The loss MLR for the SS-SRM model
2.92911
3.024371
0.968502
labels_stacked = utils.concatenate_not_none(data_labels) weights = np.empty((labels_stacked.size,)) data_shared = [None] * len(data) curr_samples = 0 for s in range(len(data)): if data[s] is not None: subject_samples = data[s].shape[1] curr_samples_end = curr_samples + subject_samples weights[curr_samples:curr_samples_end] = subject_samples data_shared[s] = w[s].T.dot(data[s]) curr_samples += data[s].shape[1] data_stacked = utils.concatenate_not_none(data_shared, axis=1).T return data_stacked, labels_stacked, weights
def _stack_list(data, data_labels, w)
Construct a numpy array by stacking arrays in a list Parameters ---------- data : list of 2D arrays, element i has shape=[voxels_i, samples_i] Each element in the list contains the fMRI data of one subject for the classification task. data_labels : list of arrays of int, element i has shape=[samples_i] Each element in the list contains the labels for the samples in data. w : list of array, element i has shape=[voxels_i, features] The orthogonal transforms (mappings) :math:`W_i` for each subject. Returns ------- data_stacked : 2D array, shape=[samples, features] The data samples from all subjects are stacked into a single 2D array, where "samples" is the sum of samples_i. labels_stacked : array, shape=[samples,] The labels from all subjects are stacked into a single array, where "samples" is the sum of samples_i. weights : array, shape=[samples,] The number of samples of the subject that are related to that sample. They become a weight per sample in the MLR loss.
3.378434
3.010392
1.122257
voxel_fn = extra_params[0] shape_mask = extra_params[1] min_active_voxels_proportion = extra_params[2] outmat = np.empty(msk.shape, dtype=np.object)[mysl_rad:-mysl_rad, mysl_rad:-mysl_rad, mysl_rad:-mysl_rad] for i in range(0, outmat.shape[0]): for j in range(0, outmat.shape[1]): for k in range(0, outmat.shape[2]): if msk[i+mysl_rad, j+mysl_rad, k+mysl_rad]: searchlight_slice = np.s_[ i:i+2*mysl_rad+1, j:j+2*mysl_rad+1, k:k+2*mysl_rad+1] voxel_fn_mask = msk[searchlight_slice] * shape_mask if (min_active_voxels_proportion == 0 or np.count_nonzero(voxel_fn_mask) / voxel_fn_mask.size > min_active_voxels_proportion): outmat[i, j, k] = voxel_fn( [ll[searchlight_slice] for ll in l], msk[searchlight_slice] * shape_mask, mysl_rad, bcast_var) return outmat
def _singlenode_searchlight(l, msk, mysl_rad, bcast_var, extra_params)
Run searchlight function on block data in parallel. `extra_params` contains: - Searchlight function. - `Shape` mask. - Minimum active voxels proportion required to run the searchlight function.
2.25598
2.018317
1.117753
rank = self.comm.rank B = [(rank, idx) for (idx, c) in enumerate(data) if c is not None] C = self.comm.allreduce(B) ownership = [None] * len(data) for c in C: ownership[c[1]] = c[0] return ownership
def _get_ownership(self, data)
Determine on which rank each subject currently resides Parameters ---------- data: list of 4D arrays with subject data Returns ------- list of ranks indicating the owner of each subject
4.349696
4.652554
0.934905
blocks = [] outerblk = self.max_blk_edge + 2*self.sl_rad for i in range(0, mask.shape[0], self.max_blk_edge): for j in range(0, mask.shape[1], self.max_blk_edge): for k in range(0, mask.shape[2], self.max_blk_edge): block_shape = mask[i:i+outerblk, j:j+outerblk, k:k+outerblk ].shape if np.any( mask[i+self.sl_rad:i+block_shape[0]-self.sl_rad, j+self.sl_rad:j+block_shape[1]-self.sl_rad, k+self.sl_rad:k+block_shape[2]-self.sl_rad]): blocks.append(((i, j, k), block_shape)) return blocks
def _get_blocks(self, mask)
Divide the volume into a set of blocks Ignore blocks that have no active voxels in the mask Parameters ---------- mask: a boolean 3D array which is true at every active voxel Returns ------- list of tuples containing block information: - a triple containing top left point of the block and - a triple containing the size in voxels of the block
2.197445
2.131971
1.030711
(pt, sz) = block if len(mat.shape) == 3: return mat[pt[0]:pt[0]+sz[0], pt[1]:pt[1]+sz[1], pt[2]:pt[2]+sz[2]].copy() elif len(mat.shape) == 4: return mat[pt[0]:pt[0]+sz[0], pt[1]:pt[1]+sz[1], pt[2]:pt[2]+sz[2], :].copy()
def _get_block_data(self, mat, block)
Retrieve a block from a 3D or 4D volume Parameters ---------- mat: a 3D or 4D volume block: a tuple containing block information: - a triple containing the lowest-coordinate voxel in the block - a triple containing the size in voxels of the block Returns ------- In the case of a 3D array, a 3D subarray at the block location In the case of a 4D array, a 4D subarray at the block location, including the entire fourth dimension.
1.804486
1.788728
1.00881
return [self._get_block_data(mat, block) for block in blocks]
def _split_volume(self, mat, blocks)
Convert a volume into a list of block data Parameters ---------- mat: A 3D or 4D array to be split blocks: a list of tuples containing block information: - a triple containing the top left point of the block and - a triple containing the size in voxels of the block Returns ------- A list of the subarrays corresponding to each block
5.389194
6.491529
0.830189
rank = self.comm.rank size = self.comm.size subject_submatrices = [] nblocks = self.comm.bcast(len(data) if rank == owner else None, root=owner) # For each submatrix for idx in range(0, nblocks, size): padded = None extra = max(0, idx+size - nblocks) # Pad with "None" so scatter can go to all processes if data is not None: padded = data[idx:idx+size] if extra > 0: padded = padded + [None]*extra # Scatter submatrices to all processes mytrans = self.comm.scatter(padded, root=owner) # Contribute submatrix to subject list if mytrans is not None: subject_submatrices += [mytrans] return subject_submatrices
def _scatter_list(self, data, owner)
Distribute a list from one rank to other ranks in a cyclic manner Parameters ---------- data: list of pickle-able data owner: rank that owns the data Returns ------- A list containing the data in a cyclic layout across ranks
4.662355
4.686542
0.994839
if mask.ndim != 3: raise ValueError('mask should be a 3D array') for (idx, subj) in enumerate(subjects): if subj is not None: if subj.ndim != 4: raise ValueError('subjects[{}] must be 4D'.format(idx)) self.mask = mask rank = self.comm.rank # Get/set ownership ownership = self._get_ownership(subjects) all_blocks = self._get_blocks(mask) if rank == 0 else None all_blocks = self.comm.bcast(all_blocks) # Divide data and mask splitsubj = [self._split_volume(s, all_blocks) if s is not None else None for s in subjects] submasks = self._split_volume(mask, all_blocks) # Scatter points, data, and mask self.blocks = self._scatter_list(all_blocks, 0) self.submasks = self._scatter_list(submasks, 0) self.subproblems = [self._scatter_list(s, ownership[s_idx]) for (s_idx, s) in enumerate(splitsubj)]
def distribute(self, subjects, mask)
Distribute data to MPI ranks Parameters ---------- subjects : list of 4D arrays containing data for one or more subjects. Each entry of the list must be present on at most one rank, and the other ranks contain a "None" at this list location. For example, for 3 ranks you may lay out the data in the following manner: Rank 0: [Subj0, None, None] Rank 1: [None, Subj1, None] Rank 2: [None, None, Subj2] Or alternatively, you may lay out the data in this manner: Rank 0: [Subj0, Subj1, Subj2] Rank 1: [None, None, None] Rank 2: [None, None, None] mask: 3D array with "True" entries at active vertices
3.846568
3.718835
1.034348
rank = self.comm.rank results = [] usable_cpus = usable_cpu_count() if pool_size is None: processes = usable_cpus else: processes = min(pool_size, usable_cpus) if processes > 1: with Pool(processes) as pool: for idx, block in enumerate(self.blocks): result = pool.apply_async( block_fn, ([subproblem[idx] for subproblem in self.subproblems], self.submasks[idx], self.sl_rad, self.bcast_var, extra_block_fn_params)) results.append((block[0], result)) local_outputs = [(result[0], result[1].get()) for result in results] else: # If we only are using one CPU core, no need to create a Pool, # cause an underlying fork(), and send the data to that process. # Just do it here in serial. This will save copying the memory # and will stop a fork() which can cause problems in some MPI # implementations. for idx, block in enumerate(self.blocks): subprob_list = [subproblem[idx] for subproblem in self.subproblems] result = block_fn( subprob_list, self.submasks[idx], self.sl_rad, self.bcast_var, extra_block_fn_params) results.append((block[0], result)) local_outputs = [(result[0], result[1]) for result in results] # Collect results global_outputs = self.comm.gather(local_outputs) # Coalesce results outmat = np.empty(self.mask.shape, dtype=np.object) if rank == 0: for go_rank in global_outputs: for (pt, mat) in go_rank: coords = np.s_[ pt[0]+self.sl_rad:pt[0]+self.sl_rad+mat.shape[0], pt[1]+self.sl_rad:pt[1]+self.sl_rad+mat.shape[1], pt[2]+self.sl_rad:pt[2]+self.sl_rad+mat.shape[2] ] outmat[coords] = mat return outmat
def run_block_function(self, block_fn, extra_block_fn_params=None, pool_size=None)
Perform a function for each block in a volume. Parameters ---------- block_fn: function to apply to each block: Parameters data: list of 4D arrays containing subset of subject data, which is padded with sl_rad voxels. mask: 3D array containing subset of mask data sl_rad: radius, in voxels, of the sphere inscribed in the cube bcast_var: shared data which is broadcast to all processes extra_params: extra parameters Returns 3D array which is the same size as the mask input with padding removed extra_block_fn_params: tuple Extra parameters to pass to the block function pool_size: int Maximum number of processes running the block function in parallel. If None, number of available hardware threads, considering cpusets restrictions.
3.302493
3.07704
1.07327
extra_block_fn_params = (voxel_fn, self.shape, self.min_active_voxels_proportion) block_fn_result = self.run_block_function(_singlenode_searchlight, extra_block_fn_params, pool_size) return block_fn_result
def run_searchlight(self, voxel_fn, pool_size=None)
Perform a function at each voxel which is set to True in the user-provided mask. The mask passed to the searchlight function will be further masked by the user-provided searchlight shape. Parameters ---------- voxel_fn: function to apply at each voxel Must be `serializable using pickle <https://docs.python.org/3/library/pickle.html#what-can-be-pickled-and-unpickled>`_. Parameters subj: list of 4D arrays containing subset of subject data mask: 3D array containing subset of mask data sl_rad: radius, in voxels, of the sphere inscribed in the cube bcast_var: shared data which is broadcast to all processes Returns Value of any pickle-able type pool_size: int Maximum number of processes running the voxel function in parallel. If None, number of available hardware threads, considering cpusets restrictions. Returns ------- A volume which is the same size as the mask, except that a border of voxels equal to the searchlight radius has been removed from each edge of the volume. This volume contains the values returned from the searchlight function at each voxel which was set to True in the mask, and None elsewhere.
6.31097
7.915668
0.797276
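A minimal usage sketch of the searchlight workflow described above; the mask, data shapes, and voxel function are placeholders, and the import path and constructor arguments shown should be treated as assumptions rather than the definitive signature.

```python
import numpy as np
from brainiak.searchlight.searchlight import Searchlight  # import path assumed

def voxel_fn(subj, mask, sl_rad, bcast_var):
    # subj is a list of 4D blocks (one per subject); return any pickle-able value
    return float(np.mean([np.mean(s[mask]) for s in subj]))

mask = np.zeros((10, 10, 10), dtype=bool)
mask[4:7, 4:7, 4:7] = True
data = [np.random.randn(10, 10, 10, 20)]   # one subject, 20 time points

sl = Searchlight(sl_rad=1)                 # constructor arguments assumed
sl.distribute(data, mask)
sl.broadcast(None)
result = sl.run_searchlight(voxel_fn)      # 3D array of returned values (None outside mask)
```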
shape = data.shape data = zscore(data, axis=axis, ddof=0) # if zscore fails (standard deviation is zero), # optionally set all values to be zero if not return_nans: data = np.nan_to_num(data) data = data / math.sqrt(shape[axis]) return data
def _normalize_for_correlation(data, axis, return_nans=False)
normalize the data before computing correlation The data will be z-scored and divided by sqrt(n) along the assigned axis Parameters ---------- data: 2D array axis: int specify which dimension of the data should be normalized return_nans: bool, default:False If False, return zeros for NaNs; if True, return NaNs Returns ------- data: 2D array the normalized data
4.524957
4.498696
1.005837
matrix1 = matrix1.astype(np.float32) matrix2 = matrix2.astype(np.float32) [r1, d1] = matrix1.shape [r2, d2] = matrix2.shape if d1 != d2: raise ValueError('Dimension discrepancy') # preprocess two components matrix1 = _normalize_for_correlation(matrix1, 1, return_nans=return_nans) matrix2 = _normalize_for_correlation(matrix2, 1, return_nans=return_nans) corr_data = np.empty((r1, r2), dtype=np.float32, order='C') # blas routine is column-major blas.compute_single_matrix_multiplication('T', 'N', r2, r1, d1, 1.0, matrix2, d2, matrix1, d1, 0.0, corr_data, r2) return corr_data
def compute_correlation(matrix1, matrix2, return_nans=False)
compute correlation between two sets of variables Correlate the rows of matrix1 with the rows of matrix2. If matrix1 == matrix2, it is an auto-correlation computation resulting in a symmetric correlation matrix. The number of columns MUST agree between matrix1 and matrix2. The correlation being computed here is the Pearson's correlation coefficient, which can be expressed as .. math:: corr(X, Y) = \\frac{cov(X, Y)}{\\sigma_X\\sigma_Y} where cov(X, Y) is the covariance of variables X and Y, and .. math:: \\sigma_X is the standard deviation of variable X Reducing the correlation computation to matrix multiplication and using the BLAS GEMM API wrapped by SciPy can speed up the numpy built-in correlation computation (numpy.corrcoef) by one order of magnitude .. math:: corr(X, Y) &= \\frac{\\sum\\limits_{i=1}^n (x_i-\\bar{x})(y_i-\\bar{y})}{(n-1) \\sqrt{\\frac{\\sum\\limits_{j=1}^n x_j^2-n\\bar{x}^2}{n-1}} \\sqrt{\\frac{\\sum\\limits_{j=1}^{n} y_j^2-n\\bar{y}^2}{n-1}}}\\\\ &= \\sum\\limits_{i=1}^n(\\frac{(x_i-\\bar{x})} {\\sqrt{\\sum\\limits_{j=1}^n x_j^2-n\\bar{x}^2}} \\frac{(y_i-\\bar{y})}{\\sqrt{\\sum\\limits_{j=1}^n y_j^2-n\\bar{y}^2}}) By default (return_nans=False), zeros are returned for vectors containing NaNs; if return_nans=True, such entries are returned as NaNs (np.nan) in the output. Parameters ---------- matrix1: 2D array in shape [r1, c] MUST be contiguous and row-major matrix2: 2D array in shape [r2, c] MUST be contiguous and row-major return_nans: bool, default:False If False, return zeros for NaNs; if True, return NaNs Returns ------- corr_data: 2D array in shape [r1, r2] contiguous and row-major in np.float32
2.933596
2.955496
0.99259
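The reduction to matrix multiplication can be verified against numpy.corrcoef with plain NumPy in place of the BLAS call; sizes and values below are arbitrary.

```python
import numpy as np
from scipy.stats import zscore

rng = np.random.RandomState(0)
m1 = rng.randn(4, 30).astype(np.float32)    # [r1, c]
m2 = rng.randn(5, 30).astype(np.float32)    # [r2, c], same number of columns

def normalize(m):
    z = np.nan_to_num(zscore(m, axis=1, ddof=0))
    return z / np.sqrt(m.shape[1])           # z-score rows, divide by sqrt(n)

corr = normalize(m1).dot(normalize(m2).T)    # [r1, r2] Pearson correlations
reference = np.corrcoef(m1, m2)[:4, 4:]      # rows of m1 vs rows of m2
assert np.allclose(corr, reference, atol=1e-4)
```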
alpha = 2 tau2 = (y_invK_y + 2 * tau_range**2) / (alpha * 2 + 2 + n_y) log_ptau = scipy.stats.invgamma.logpdf( tau2, scale=tau_range**2, a=2) return tau2, log_ptau
def prior_GP_var_inv_gamma(y_invK_y, n_y, tau_range)
Imposing an inverse-Gamma prior onto the variance (tau^2) parameter of a Gaussian Process, which is in turn a prior imposed over an unknown function y = f(x). The inverse-Gamma prior of tau^2, tau^2 ~ invgamma(shape, scale) is described by a shape parameter alpha=2 and a scale parameter beta=tau_range^2. tau_range describes the reasonable range of tau in the inverse-Gamma prior. The data y's at locations x's are assumed to follow Gaussian Process: f(x, x') ~ N(0, K(x, x') / 2 tau^2), where K is a kernel function defined on x. For n observations, K(x1, x2, ..., xn) is an n by n positive definite matrix. Given the prior parameter tau_range, number of observations n_y, and y_invK_y = y * inv(K) * y', the function returns the MAP estimate of tau^2 and the log posterior probability of tau^2 at the MAP value: log(p(tau^2|tau_range)). This function is written primarily for BRSA but can also be used elsewhere. y in this case corresponds to the log of SNR in each voxel. GBRSA does not rely on this function. An alternative form of prior is half-Cauchy prior on tau. Inverse-Gamma prior penalizes for both very small and very large values of tau, while half-Cauchy prior only penalizes for very large values of tau. For more information on usage, see description in BRSA class: `.BRSA` See also: `.prior_GP_var_half_cauchy` Parameters ---------- y_invK_y: float y * inv(K) * y^T, where y=f(x) is a vector of observations of unknown function f at different locations x. K is correlation matrix of f between different locations, based on a Gaussian Process (GP) describing the smoothness property of f. K fully incorporates the form of the kernel and the length scale of the GP, but not the variance of the GP (the purpose of this function is to estimate the variance). n_y: int, number of observations tau_range: float, The reasonable range of tau, the standard deviation of the Gaussian Process imposed on y=f(x). tau_range is parameter of the inverse-Gamma prior. Say, if you expect the standard deviation of the Gaussian process to be around 3, tau_range can be set to 3. The smaller it is, the more penalization is imposed on large variation of y. Returns ------- tau2: The MAP estimation of tau^2 based on the prior on tau and y_invK_y. log_ptau: log(p(tau)) of the returned tau^2 based on the inverse-Gamma prior.
4.289187
3.707706
1.15683
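For reference, a short derivation sketch of the closed form used in the code, under the assumption that the marginal likelihood of y given tau^2 is Gaussian with covariance tau^2 K (so y_invK_y enters as y^T K^{-1} y):

```latex
% Inverse-Gamma prior with shape a = 2 and scale b = \tau_{\mathrm{range}}^2,
% combined with the Gaussian marginal likelihood of the n_y observations:
p(\tau^2 \mid y) \;\propto\;
  (\tau^2)^{-(a+1)} e^{-b/\tau^2} \cdot (\tau^2)^{-n_y/2} e^{-y^\top K^{-1} y/(2\tau^2)},
% i.e. the posterior is InvGamma(a + n_y/2, \; b + y^\top K^{-1} y / 2).
% The mode of InvGamma(shape, scale) is scale/(shape + 1), hence
\hat{\tau}^2 = \frac{b + y^\top K^{-1} y/2}{a + n_y/2 + 1}
             = \frac{y^\top K^{-1} y + 2\,\tau_{\mathrm{range}}^2}{2a + 2 + n_y},
% matching (y_invK_y + 2 * tau_range**2) / (alpha * 2 + 2 + n_y) with alpha = 2.
```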
tau2 = (y_invK_y - n_y * tau_range**2 + np.sqrt(n_y**2 * tau_range**4 + (2 * n_y + 8) * tau_range**2 * y_invK_y + y_invK_y**2))\ / 2 / (n_y + 2) log_ptau = scipy.stats.halfcauchy.logpdf( tau2**0.5, scale=tau_range) return tau2, log_ptau
def prior_GP_var_half_cauchy(y_invK_y, n_y, tau_range)
Imposing a half-Cauchy prior onto the standard deviation (tau) of the Gaussian Process which is in turn a prior imposed over a function y = f(x). The scale parameter of the half-Cauchy prior is tau_range. The function returns the MAP estimate of tau^2 and log(p(tau|tau_range)) for the MAP value of tau^2, where tau_range describes the reasonable range of tau in the half-Cauchy prior. An alternative form of prior is inverse-Gamma prior on tau^2. Inverse-Gamma prior penalizes for both very small and very large values of tau, while half-Cauchy prior only penalizes for very large values of tau. For more information on usage, see description in BRSA class: `.BRSA`
3.434268
3.430038
1.001233
beta = X.shape[0] / X.shape[1] if beta > 1: beta = 1 / beta omega = 0.56 * beta ** 3 - 0.95 * beta ** 2 + 1.82 * beta + 1.43 if zscore: sing = np.linalg.svd(_zscore(X), False, False) else: sing = np.linalg.svd(X, False, False) thresh = omega * np.median(sing) ncomp = int(np.sum(np.logical_and(sing > thresh, np.logical_not( np.isclose(sing, thresh))))) # In the line above, we look for the singular values larger than # the threshold but excluding those that happen to be "just" larger # than the threshold by an amount close to the numerical precision. # This is to prevent close-to-zero singular values to be included if # the median of the eigenvalues is close to 0 (which could happen # when the input X has lower rank than its minimal size. return ncomp
def Ncomp_SVHT_MG_DLD_approx(X, zscore=True)
This function implements the approximate calculation of the optimal hard threshold for singular values, by Matan Gavish and David L. Donoho: "The optimal hard threshold for singular values is 4 / sqrt(3)" http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6846297 Parameters ---------- X: 2-D numpy array of size [n_T, n_V] The data to estimate the optimal rank for selecting principal components. zscore: Boolean Whether to z-score the data before calculating number of components. Returns ------- ncomp: integer The optimal number of components determined by the method of MG and DLD
5.37253
5.586894
0.961631
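A small demonstration of the thresholding rule on a synthetic low-rank-plus-noise matrix; the rank, noise level, and shapes are made up, and the z-scoring step of the full function is skipped here.

```python
import numpy as np

rng = np.random.RandomState(0)
n_T, n_V, true_rank = 200, 300, 5
X = rng.randn(n_T, true_rank).dot(rng.randn(true_rank, n_V)) \
    + 0.5 * rng.randn(n_T, n_V)              # low-rank signal plus noise

beta = min(n_T / n_V, n_V / n_T)             # aspect ratio <= 1
omega = 0.56 * beta ** 3 - 0.95 * beta ** 2 + 1.82 * beta + 1.43
sing = np.linalg.svd(X, compute_uv=False)
thresh = omega * np.median(sing)
ncomp = int(np.sum(sing > thresh))
print(ncomp)   # typically recovers a value close to true_rank at this noise level
```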
assert a.ndim > 1, 'a must have more than one dimensions' zscore = scipy.stats.zscore(a, axis=0) zscore[:, np.logical_not(np.all(np.isfinite(zscore), axis=0))] = 0 return zscore
def _zscore(a)
Calculating z-score of data on the first axis. If the numbers in any column are all equal, scipy.stats.zscore will return NaN for this column. We shall correct them all to be zeros. Parameters ---------- a: numpy array Returns ------- zscore: numpy array The z-scores of input "a", with any columns including non-finite numbers replaced by all zeros.
2.968623
2.956703
1.004031
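A quick demonstration of the constant-column handling described above (toy array, values invented):

```python
import numpy as np
import scipy.stats

a = np.array([[1.0, 2.0],
              [1.0, 4.0],
              [1.0, 6.0]])     # first column is constant

z = scipy.stats.zscore(a, axis=0)
# scipy yields NaN for the constant column; replace such columns with zeros
z[:, np.logical_not(np.all(np.isfinite(z), axis=0))] = 0
print(z)                       # first column becomes all zeros
```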
assert X.ndim == 2 and X.shape[1] == self.beta_.shape[1], \ 'The shape of X is not consistent with the shape of data '\ 'used in the fitting step. They should have the same number '\ 'of voxels' assert scan_onsets is None or (scan_onsets.ndim == 1 and 0 in scan_onsets), \ 'scan_onsets should either be None or an array of indices '\ 'If it is given, it should include at least 0' if scan_onsets is None: scan_onsets = np.array([0], dtype=int) else: scan_onsets = np.int32(scan_onsets) ts, ts0, log_p = self._transform( Y=X, scan_onsets=scan_onsets, beta=self.beta_, beta0=self.beta0_, rho_e=self.rho_, sigma_e=self.sigma_, rho_X=self._rho_design_, sigma2_X=self._sigma2_design_, rho_X0=self._rho_X0_, sigma2_X0=self._sigma2_X0_) return ts, ts0
def transform(self, X, y=None, scan_onsets=None)
Use the model to estimate the time course of response to each condition (ts), and the time course unrelated to task (ts0) which is spread across the brain. This is equivalent to "decoding" the design matrix and nuisance regressors from a new dataset different from the training dataset on which fit() was applied. An AR(1) smooth prior is imposed on the decoded ts and ts0 with the AR(1) parameters learnt from the corresponding time courses in the training data. Notice: if you set the rank to be lower than the number of experimental conditions (number of columns in the design matrix), the recovered task-related activity will have collinearity (the recovered time courses of some conditions can be linearly explained by the recovered time courses of other conditions). Parameters ---------- X : numpy arrays, shape=[time_points, voxels] fMRI data of new data of the same subject. The voxels should match those used in the fit() function. If data are z-scored (recommended) when fitting the model, data should be z-scored as well when calling transform() y : not used (as it is unsupervised learning) scan_onsets : numpy array, shape=[number of runs]. A list of indices corresponding to the onsets of scans in the data X. If not provided, data will be assumed to be acquired in a continuous scan. Returns ------- ts : numpy arrays, shape = [time_points, condition] The estimated response to the task conditions which have the response amplitudes estimated during the fit step. ts0: numpy array, shape = [time_points, n_nureg] The estimated time course spread across the brain, with the loading weights estimated during the fit step.
3.472832
3.171684
1.094949
assert X.ndim == 2 and X.shape[1] == self.beta_.shape[1], \ 'The shape of X is not consistent with the shape of data '\ 'used in the fitting step. They should have the same number '\ 'of voxels' assert scan_onsets is None or (scan_onsets.ndim == 1 and 0 in scan_onsets), \ 'scan_onsets should either be None or an array of indices '\ 'If it is given, it should include at least 0' if scan_onsets is None: scan_onsets = np.array([0], dtype=int) else: scan_onsets = np.int32(scan_onsets) ll = self._score(Y=X, design=design, beta=self.beta_, scan_onsets=scan_onsets, beta0=self.beta0_, rho_e=self.rho_, sigma_e=self.sigma_, rho_X0=self._rho_X0_, sigma2_X0=self._sigma2_X0_) ll_null = self._score(Y=X, design=None, beta=None, scan_onsets=scan_onsets, beta0=self.beta0_, rho_e=self.rho_, sigma_e=self.sigma_, rho_X0=self._rho_X0_, sigma2_X0=self._sigma2_X0_) return ll, ll_null
def score(self, X, design, scan_onsets=None)
Use the model and parameters estimated by the fit function from some data of a participant to evaluate the log likelihood of some new data of the same participant. A design matrix for the same set of experimental conditions in the testing data should be provided, with each column corresponding to the same condition as that column in the design matrix of the training data. Unknown nuisance time series will be marginalized, assuming they follow the same spatial pattern as in the training data. The hypothetical response captured by the design matrix will be subtracted from data before the marginalization when evaluating the log likelihood. For the null model, nothing will be subtracted before marginalization. There is a difference between the form of the likelihood function used in fit() and score(). In fit(), the response amplitude beta to the design matrix X and the modulation beta0 by the nuisance regressors X0 are both marginalized, with X provided and X0 estimated from data. In score(), the posterior estimates of beta and beta0 from the fitting step are assumed to carry over unchanged to the testing data, and X0 is marginalized. The logic underlying score() is to transfer as much as possible of what we learn from the training data when calculating a likelihood score for the testing data. If you z-scored your data during the fit step, you should z-score them for the score function as well. If you did not z-score in fitting, you should not z-score here either. Parameters ---------- X : numpy arrays, shape=[time_points, voxels] fMRI data of new data of the same subject. The voxels should match those used in the fit() function. If data are z-scored (recommended) when fitting the model, data should be z-scored as well when calling transform() design : numpy array, shape=[time_points, conditions] Design matrix expressing the hypothetical response of the task conditions in data X. scan_onsets : numpy array, shape=[number of runs]. A list of indices corresponding to the onsets of scans in the data X. If not provided, data will be assumed to be acquired in a continuous scan. Returns ------- ll: float. The log likelihood of the new data based on the model and its parameters fit to the training data. ll_null: float. The log likelihood of the new data based on a null model which assumes the same as the full model for everything except that there is no response to any of the task conditions.
2.685309
2.454894
1.093859
run_TRs, n_run = self._run_TR_from_scan_onsets(n_T, scan_onsets) D_ele = map(self._D_gen, run_TRs) F_ele = map(self._F_gen, run_TRs) D = scipy.linalg.block_diag(*D_ele) F = scipy.linalg.block_diag(*F_ele) # D and F above are templates for constructing # the inverse of temporal covariance matrix of noise return D, F, run_TRs, n_run
def _prepare_DF(self, n_T, scan_onsets=None)
Prepare the essential template matrices D and F for pre-calculating some terms to be re-used. The inverse covariance matrix of AR(1) noise is sigma^-2 * (I - rho1*D + rho1**2 * F). And we denote A = I - rho1*D + rho1**2 * F
4.413514
3.966551
1.112683
XTY, XTDY, XTFY = self._make_templates(D, F, X, Y) YTY_diag = np.sum(Y * Y, axis=0) YTDY_diag = np.sum(Y * np.dot(D, Y), axis=0) YTFY_diag = np.sum(Y * np.dot(F, Y), axis=0) XTX, XTDX, XTFX = self._make_templates(D, F, X, X) return XTY, XTDY, XTFY, YTY_diag, YTDY_diag, YTFY_diag, XTX, \ XTDX, XTFX
def _prepare_data_XY(self, X, Y, D, F)
Prepares different forms of products of design matrix X and data Y, or between themselves. These products are re-used a lot during fitting. So we pre-calculate them. Because these are reused, it is in principle possible to update the fitting as new data come in, by just incrementally adding the products of new data and their corresponding parts of design matrix to these pre-calculated terms.
2.080883
2.126164
0.978703
X_DC = self._gen_X_DC(run_TRs) reg_sol = np.linalg.lstsq(X_DC, X) if np.any(np.isclose(reg_sol[1], 0)): raise ValueError('Your design matrix appears to have ' 'included baseline time series.' 'Either remove them, or move them to' ' nuisance regressors.') X_DC, X_base, idx_DC = self._merge_DC_to_base(X_DC, X_base, no_DC) if X_res is None: X0 = X_base else: X0 = np.concatenate((X_base, X_res), axis=1) n_X0 = X0.shape[1] X0TX0, X0TDX0, X0TFX0 = self._make_templates(D, F, X0, X0) XTX0, XTDX0, XTFX0 = self._make_templates(D, F, X, X0) X0TY, X0TDY, X0TFY = self._make_templates(D, F, X0, Y) return X0TX0, X0TDX0, X0TFX0, XTX0, XTDX0, XTFX0, \ X0TY, X0TDY, X0TFY, X0, X_base, n_X0, idx_DC
def _prepare_data_XYX0(self, X, Y, X_base, X_res, D, F, run_TRs, no_DC=False)
Prepares different forms of products among the design matrix X, the data Y, and the nuisance regressors X0. These products are re-used a lot during fitting, so we pre-calculate them. no_DC means not inserting regressors for DC components into the nuisance regressors. It only takes effect if X_base is not None.
2.880663
2.741368
1.050812
if X_base is not None: reg_sol = np.linalg.lstsq(X_DC, X_base) if not no_DC: if not np.any(np.isclose(reg_sol[1], 0)): # No columns in X_base can be explained by the # baseline regressors. So we insert them. X_base = np.concatenate((X_DC, X_base), axis=1) idx_DC = np.arange(0, X_DC.shape[1]) else: logger.warning('Provided regressors for uninteresting ' 'time series already include baseline. ' 'No additional baseline is inserted.') idx_DC = np.where(np.isclose(reg_sol[1], 0))[0] else: idx_DC = np.where(np.isclose(reg_sol[1], 0))[0] else: # If a set of regressors for non-interested signals is not # provided, then we simply include one baseline for each run. X_base = X_DC idx_DC = np.arange(0, X_base.shape[1]) logger.info('You did not provide time series of no interest ' 'such as DC component. Trivial regressors of' ' DC component are included for further modeling.' ' The final covariance matrix won''t ' 'reflect these components.') return X_DC, X_base, idx_DC
def _merge_DC_to_base(self, X_DC, X_base, no_DC)
Merge the DC components X_DC into the baseline time series X_base. (By baseline, we mean any fixed nuisance regressors not updated during fitting, including DC components and any nuisance regressors provided by the user.) X_DC always occupies the first few columns of X_base.
4.236484
4.05469
1.044835
idx_param_sing = {'Cholesky': np.arange(n_l), 'a1': n_l} # for simplified fitting idx_param_fitU = {'Cholesky': np.arange(n_l), 'a1': np.arange(n_l, n_l + n_V)} # for the likelihood function when we fit U (the shared covariance). idx_param_fitV = {'log_SNR2': np.arange(n_V - 1), 'c_space': n_V - 1, 'c_inten': n_V, 'c_both': np.arange(n_V - 1, n_V - 1 + n_smooth)} # for the likelihood function when we fit V (reflected by SNR of # each voxel) return idx_param_sing, idx_param_fitU, idx_param_fitV
def _build_index_param(self, n_l, n_V, n_smooth)
Build dictionaries to retrieve each parameter from the combined parameters.
5.054386
4.927182
1.025817
chol = np.linalg.cholesky(M)
if M.ndim == 2:
    return np.sum(np.log(np.abs(np.diag(chol))))
else:
    return np.sum(np.log(np.abs(np.diagonal(
        chol, axis1=-2, axis2=-1))), axis=-1)
def _half_log_det(self, M)
Return log(|M|)*0.5. For a positive definite matrix M with more than 2 dimensions, calculate this for the last two dimensions and return a value corresponding to each element in the leading dimensions.
2.244925
2.114244
1.06181
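A quick standalone sanity check of the identity used above (not calling the class method): for a positive definite M, the sum of the log of the Cholesky diagonal equals 0.5 * log(det(M)).

import numpy as np

rng = np.random.RandomState(0)
A = rng.randn(4, 4)
M = A.dot(A.T) + 4 * np.eye(4)            # positive definite
chol = np.linalg.cholesky(M)
half_log_det = np.sum(np.log(np.abs(np.diag(chol))))
sign, logdet = np.linalg.slogdet(M)
assert np.isclose(half_log_det, 0.5 * logdet)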
logger.info('Transforming new data.') # Constructing the transition matrix and the variance of # innovation noise as prior for the latent variable X and X0 # in new data. n_C = beta.shape[0] n_T = Y.shape[0] weight = np.concatenate((beta, beta0), axis=0) T_X = np.diag(np.concatenate((rho_X, rho_X0))) Var_X = np.concatenate((sigma2_X / (1 - rho_X**2), sigma2_X0 / (1 - rho_X0**2))) Var_dX = np.concatenate((sigma2_X, sigma2_X0)) sigma2_e = sigma_e ** 2 scan_onsets = np.setdiff1d(scan_onsets, n_T) n_scan = scan_onsets.size X = [None] * scan_onsets.size X0 = [None] * scan_onsets.size total_log_p = 0 for scan, onset in enumerate(scan_onsets): # Forward step if scan == n_scan - 1: offset = n_T else: offset = scan_onsets[scan + 1] mu, mu_Gamma_inv, Gamma_inv, log_p_data, Lambda_0, \ Lambda_1, H, deltaY, deltaY_sigma2inv_rho_weightT = \ self._forward_step(Y[onset:offset, :], T_X, Var_X, Var_dX, rho_e, sigma2_e, weight) total_log_p += log_p_data # Backward step mu_hat, mu_Gamma_inv_hat, Gamma_inv_hat \ = self._backward_step( deltaY, deltaY_sigma2inv_rho_weightT, sigma2_e, weight, mu, mu_Gamma_inv, Gamma_inv, Lambda_0, Lambda_1, H) X[scan] = np.concatenate( [mu_t[None, :n_C] for mu_t in mu_hat]) X0[scan] = np.concatenate( [mu_t[None, n_C:] for mu_t in mu_hat]) X = np.concatenate(X) X0 = np.concatenate(X0) return X, X0, total_log_p
def _transform(self, Y, scan_onsets, beta, beta0, rho_e, sigma_e, rho_X, sigma2_X, rho_X0, sigma2_X0)
Given the data Y and the response amplitudes beta and beta0 estimated in the fit step, estimate the corresponding X and X0. It is done by a forward-backward algorithm. We assume X and X0 both are vector autoregressive (VAR) processes, to capture temporal smoothness. Their VAR parameters are estimated from training data at the fit stage.
2.88857
2.851804
1.012892
logger.info('Estimating cross-validated score for new data.') n_T = Y.shape[0] if design is not None: Y = Y - np.dot(design, beta) # The function works for both full model and null model. # If design matrix is not provided, the whole data is # used as input for _forward_step. If design matrix is provided, # residual after subtracting design * beta is fed to _forward_step T_X = np.diag(rho_X0) Var_X = sigma2_X0 / (1 - rho_X0**2) Var_dX = sigma2_X0 # Prior parmeters for X0: T_X is transitioning matrix, Var_X # is the marginal variance of the first time point. Var_dX is the # variance of the updating noise. sigma2_e = sigma_e ** 2 # variance of voxel-specific updating noise component scan_onsets = np.setdiff1d(scan_onsets, n_T).astype(int) n_scan = scan_onsets.size total_log_p = 0 for scan, onset in enumerate(scan_onsets): # Forward step if scan == n_scan - 1: offset = n_T else: offset = scan_onsets[scan + 1] _, _, _, log_p_data, _, _, _, _, _ = \ self._forward_step( Y[onset:offset, :], T_X, Var_X, Var_dX, rho_e, sigma2_e, beta0) total_log_p += log_p_data return total_log_p
def _score(self, Y, design, beta, scan_onsets, beta0, rho_e, sigma_e, rho_X0, sigma2_X0)
Given the data Y and the spatial pattern beta0 of the nuisance time series, return the cross-validated score of the data Y given all parameters of the subject estimated during the first step. It is assumed that the user has a design matrix built for the data Y. Both beta and beta0 are posterior expectations estimated from training data, with the estimated covariance matrix U and SNR serving as priors. We marginalize X0 instead of fitting it in this function because this function is for the purpose of evaluating the model on new data. We should avoid doing any additional fitting when performing cross-validation. The hypothetical response to the task will be subtracted, and the unknown nuisance activity which contributes to the data through beta0 will be marginalized.
4.577852
4.407941
1.038547
if same_para: n_c = x.shape[1] x = np.reshape(x, x.size, order='F') rho, sigma2 = alg.AR_est_YW(x, 1) # We concatenate all the design matrix to estimate common AR(1) # parameters. This creates some bias because the end of one column # and the beginning of the next column of the design matrix are # treated as consecutive samples. rho = np.ones(n_c) * rho sigma2 = np.ones(n_c) * sigma2 else: rho = np.zeros(np.shape(x)[1]) sigma2 = np.zeros(np.shape(x)[1]) for c in np.arange(np.shape(x)[1]): rho[c], sigma2[c] = alg.AR_est_YW(x[:, c], 1) return rho, sigma2
def _est_AR1(self, x, same_para=False)
Estimate the AR(1) parameters of input x. Each column of x is assumed to be independent of the other columns, and each column is treated as an AR(1) process. If same_para is set to True, then all columns of x are concatenated and a single set of AR(1) parameters is estimated. Strictly speaking, the breaking points between the concatenated columns should be taken into account; for long time series, this is ignored.
3.814788
3.441514
1.108462
n_T = len(Gamma_inv) # All the terms with hat before are parameters of posterior # distributions of X conditioned on data from all time points, # whereas the ones without hat calculated by _forward_step # are mean and covariance of posterior of X conditioned on # data up to the time point. Gamma_inv_hat = [None] * n_T mu_Gamma_inv_hat = [None] * n_T mu_hat = [None] * n_T mu_hat[-1] = mu[-1].copy() mu_Gamma_inv_hat[-1] = mu_Gamma_inv[-1].copy() Gamma_inv_hat[-1] = Gamma_inv[-1].copy() for t in np.arange(n_T - 2, -1, -1): tmp = np.linalg.solve(Gamma_inv_hat[t + 1] - Gamma_inv[t + 1] + Lambda_1, H) Gamma_inv_hat[t] = Gamma_inv[t] + Lambda_0 - np.dot(H.T, tmp) mu_Gamma_inv_hat[t] = mu_Gamma_inv[t] \ - deltaY_sigma2inv_rho_weightT[t, :] + np.dot( mu_Gamma_inv_hat[t + 1] - mu_Gamma_inv[t + 1] + np.dot(deltaY[t, :] / sigma2_e, weight.T), tmp) mu_hat[t] = np.linalg.solve(Gamma_inv_hat[t], mu_Gamma_inv_hat[t]) return mu_hat, mu_Gamma_inv_hat, Gamma_inv_hat
def _backward_step(self, deltaY, deltaY_sigma2inv_rho_weightT, sigma2_e, weight, mu, mu_Gamma_inv, Gamma_inv, Lambda_0, Lambda_1, H)
Backward step for the HMM, assuming both the hidden state and the noise have a 1-step dependence on the previous value.
2.754465
2.780325
0.990699
X = self._check_data_GBRSA(X, for_fit=False) scan_onsets = self._check_scan_onsets_GBRSA(scan_onsets, X) assert len(X) == self.n_subj_ ts = [None] * self.n_subj_ ts0 = [None] * self.n_subj_ log_p = [None] * self.n_subj_ for i, x in enumerate(X): if x is not None: s = scan_onsets[i] ts[i], ts0[i], log_p[i] = self._transform( Y=x, scan_onsets=s, beta=self.beta_[i], beta0=self.beta0_[i], rho_e=self.rho_[i], sigma_e=self.sigma_[i], rho_X=self._rho_design_[i], sigma2_X=self._sigma2_design_[i], rho_X0=self._rho_X0_[i], sigma2_X0=self._sigma2_X0_[i]) return ts, ts0
def transform(self, X, y=None, scan_onsets=None)
Use the model to estimate the time course of response to each condition (ts), and the time course unrelated to task (ts0) which is spread across the brain. This is equivalent to "decoding" the design matrix and nuisance regressors from a new dataset different from the training dataset on which fit() was applied. An AR(1) smooth prior is imposed on the decoded ts and ts0 with the AR(1) parameters learnt from the corresponding time courses in the training data. Parameters ---------- X : list of 2-D arrays. For each item, shape=[time_points, voxels] New fMRI data of the same subjects. The voxels should match those used in the fit() function. The size of the list should match the size of the list X fed to fit(), with each item in the list corresponding to data from the same subject in the X fed to fit(). If you do not need to transform some subjects' data, leave the entry corresponding to that subject as None. If data are z-scored when fitting the model, data should be z-scored as well when calling transform() y : not used (as it is unsupervised learning) scan_onsets : list of 1-D numpy arrays, Each array corresponds to the onsets of scans in the data X for the particular subject. If not provided, data will be assumed to be acquired in a continuous scan. Returns ------- ts : list of 2-D arrays. For each, shape = [time_points, condition] The estimated response to the cognitive dimensions (task dimensions) whose response amplitudes were estimated during the fit step. One item for each subject. If some subjects' data are not provided, None will be returned. ts0: list of 2-D array. For each, shape = [time_points, n_nureg] The estimated time courses spread across the brain, with the loading weights estimated during the fit step. One item for each subject. If some subjects' data are not provided, None will be returned.
2.993141
2.753137
1.087175
boundaries = np.flip(scipy.stats.expon.isf(
    np.linspace(0, 1, n_bin + 1), scale=scale), axis=0)
bins = np.empty(n_bin)
for i in np.arange(n_bin):
    bins[i] = utils.center_mass_exp(
        (boundaries[i], boundaries[i + 1]), scale=scale)
return bins
def _bin_exp(self, n_bin, scale=1.0)
Calculate the bin locations to approximate an exponential distribution. It breaks the cumulative probability of the exponential distribution into n_bin equal bins, each covering 1 / n_bin probability. Then it calculates the center of mass in each bin and returns the centers of mass. So, it approximates the exponential distribution with n_bin Delta functions, each weighted by 1 / n_bin, at the locations of these centers of mass. Parameters: ----------- n_bin: int The number of bins to approximate the exponential distribution scale: float. The scale parameter of the exponential distribution, defined in the same way as scipy.stats. It does not influence the ratios between the bins, but just controls the spacing between the bins. So generally users should not change its default. Returns: -------- bins: numpy array of size [n_bin,] The centers of mass for each segment of the exponential distribution.
3.181073
3.119428
1.019762
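A standalone sketch of the binning idea above, without calling the class method: split the exponential distribution into n_bin equal-probability segments and take the conditional mean (center of mass) of each segment. The n_bin and scale values are arbitrary.

import numpy as np
import scipy.stats

n_bin, scale = 4, 1.0
boundaries = scipy.stats.expon.ppf(np.linspace(0, 1, n_bin + 1), scale=scale)
bins = np.array([scipy.stats.expon.expect(lb=lo, ub=hi, conditional=True,
                                          scale=scale)
                 for lo, hi in zip(boundaries[:-1], boundaries[1:])])
# Each of the n_bin delta functions at `bins` carries weight 1 / n_bin.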
if self.SNR_prior == 'unif': SNR_grids = np.linspace(0, 1, self.SNR_bins) SNR_weights = np.ones(self.SNR_bins) / (self.SNR_bins - 1) SNR_weights[0] = SNR_weights[0] / 2.0 SNR_weights[-1] = SNR_weights[-1] / 2.0 elif self.SNR_prior == 'lognorm': dist = scipy.stats.lognorm alphas = np.arange(np.mod(self.SNR_bins, 2), self.SNR_bins + 2, 2) / self.SNR_bins # The goal here is to divide the area under the pdf curve # to segments representing equal probabilities. bounds = dist.interval(alphas, (self.logS_range,)) bounds = np.unique(bounds) # bounds contain the boundaries which equally separate # the probability mass of the distribution SNR_grids = np.zeros(self.SNR_bins) for i in np.arange(self.SNR_bins): SNR_grids[i] = dist.expect( lambda x: x, args=(self.logS_range,), lb=bounds[i], ub=bounds[i + 1]) * self.SNR_bins # Center of mass of each segment between consecutive # bounds are set as the grids for SNR. SNR_weights = np.ones(self.SNR_bins) / self.SNR_bins elif self.SNR_prior == 'exp': SNR_grids = self._bin_exp(self.SNR_bins) SNR_weights = np.ones(self.SNR_bins) / self.SNR_bins else: SNR_grids = np.ones(1) SNR_weights = np.ones(1) SNR_weights = SNR_weights / np.sum(SNR_weights) return SNR_grids, SNR_weights
def _set_SNR_grids(self)
Set the grids and weights for SNR used in numerical integration of SNR parameters.
2.982758
2.924313
1.019986
rho_grids = np.arange(self.rho_bins) * 2 / self.rho_bins - 1 \
    + 1 / self.rho_bins
rho_weights = np.ones(self.rho_bins) / self.rho_bins
return rho_grids, rho_weights
def _set_rho_grids(self)
Set the grids and weights for rho used in numerical integration of AR(1) parameters.
3.101884
2.79876
1.108306
half_log_det_X0TAX0 = np.reshape( np.repeat(self._half_log_det(X0TAX0)[None, :], self.SNR_bins, axis=0), n_grid) X0TAX0 = np.reshape( np.repeat(X0TAX0[None, :, :, :], self.SNR_bins, axis=0), (n_grid, n_X0, n_X0)) X0TAX0_i = np.reshape(np.repeat( X0TAX0_i[None, :, :, :], self.SNR_bins, axis=0), (n_grid, n_X0, n_X0)) s2XTAcorrX = np.reshape( SNR_grids[:, None, None, None]**2 * XTAcorrX, (n_grid, n_C, n_C)) YTAcorrY_diag = np.reshape(np.repeat( YTAcorrY_diag[None, :, :], self.SNR_bins, axis=0), (n_grid, n_V)) sXTAcorrY = np.reshape(SNR_grids[:, None, None, None] * XTAcorrY, (n_grid, n_C, n_V)) X0TAY = np.reshape(np.repeat(X0TAY[None, :, :, :], self.SNR_bins, axis=0), (n_grid, n_X0, n_V)) XTAX0 = np.reshape(np.repeat(XTAX0[None, :, :, :], self.SNR_bins, axis=0), (n_grid, n_C, n_X0)) return half_log_det_X0TAX0, X0TAX0, X0TAX0_i, s2XTAcorrX, \ YTAcorrY_diag, sXTAcorrY, X0TAY, XTAX0
def _matrix_flattened_grid(self, X0TAX0, X0TAX0_i, SNR_grids, XTAcorrX, YTAcorrY_diag, XTAcorrY, X0TAY, XTAX0, n_C, n_V, n_X0, n_grid)
We need to integrate parameters SNR and rho on 2-d discrete grids. This function generates matrices which have only one dimension for these two parameters, with each slice in that dimension corresponding to each combination of the discrete grids of SNR and discrete grids of rho.
1.601341
1.634348
0.979804
logger.info('Starting RSRM') # Check that the regularizer value is positive if 0.0 >= self.lam: raise ValueError("Gamma parameter should be positive.") # Check the number of subjects if len(X) <= 1: raise ValueError("There are not enough subjects in the input " "data to train the model.") # Check for input data sizes if X[0].shape[1] < self.features: raise ValueError( "There are not enough timepoints to train the model with " "{0:d} features.".format(self.features)) # Check if all subjects have same number of TRs for alignment number_trs = X[0].shape[1] number_subjects = len(X) for subject in range(number_subjects): assert_all_finite(X[subject]) if X[subject].shape[1] != number_trs: raise ValueError("Different number of alignment timepoints " "between subjects.") # Create a new random state self.random_state_ = np.random.RandomState(self.rand_seed) # Run RSRM self.w_, self.r_, self.s_ = self._rsrm(X) return self
def fit(self, X)
Compute the Robust Shared Response Model Parameters ---------- X : list of 2D arrays, element i has shape=[voxels_i, timepoints] Each element in the list contains the fMRI data of one subject.
4.041771
3.812639
1.060098
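A minimal usage sketch for the fit described above. It assumes the class is brainiak.funcalign.rsrm.RSRM and that its constructor accepts n_iter, features and gamma as below (check your installed version); the synthetic data sizes are made up.

import numpy as np
from brainiak.funcalign.rsrm import RSRM

rng = np.random.RandomState(0)
n_subjects, n_timepoints = 3, 100
data = [rng.randn(50 + 10 * i, n_timepoints) for i in range(n_subjects)]

model = RSRM(n_iter=10, features=5, gamma=1.0)
model.fit(data)
# model.w_ : per-subject orthogonal mappings, model.r_ : shared response,
# model.s_ : per-subject sparse individual components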
# Check if the model exist if hasattr(self, 'w_') is False: raise NotFittedError("The model fit has not been run yet.") # Check the number of subjects if len(X) != len(self.w_): raise ValueError("The number of subjects does not match the one" " in the model.") r = [None] * len(X) s = [None] * len(X) for subject in range(len(X)): if X[subject] is not None: r[subject], s[subject] = self._transform_new_data(X[subject], subject) return r, s
def transform(self, X)
Use the model to transform new data to Shared Response space Parameters ---------- X : list of 2D arrays, element i has shape=[voxels_i, timepoints_i] Each element in the list contains the fMRI data of one subject. Returns ------- r : list of 2D arrays, element i has shape=[features_i, timepoints_i] Shared responses from input data (X) s : list of 2D arrays, element i has shape=[voxels_i, timepoints_i] Individual data obtained from fitting model to input data (X)
3.250835
2.840767
1.144351
S = np.zeros_like(X)
R = None
for i in range(self.n_iter):
    R = self.w_[subject].T.dot(X - S)
    S = self._shrink(X - self.w_[subject].dot(R), self.lam)
return R, S
def _transform_new_data(self, X, subject)
Transform new data for a subject by projecting it to the shared subspace and computing the individual information. Parameters ---------- X : array, shape=[voxels, timepoints] The fMRI data of the subject. subject : int The subject id. Returns ------- R : array, shape=[features, timepoints] Shared response from input data (X) S : array, shape=[voxels, timepoints] Individual data obtained from fitting model to input data (X)
4.374014
4.463033
0.980054
# Check if the model exist if hasattr(self, 'w_') is False: raise NotFittedError("The model fit has not been run yet.") # Check the number of TRs in the subject if X.shape[1] != self.r_.shape[1]: raise ValueError("The number of timepoints(TRs) does not match the" "one in the model.") s = np.zeros_like(X) for i in range(self.n_iter): w = self._update_transform_subject(X, s, self.r_) s = self._shrink(X - w.dot(self.r_), self.lam) return w, s
def transform_subject(self, X)
Transform a new subject using the existing model Parameters ---------- X : 2D array, shape=[voxels, timepoints] The fMRI data of the new subject. Returns ------- w : 2D array, shape=[voxels, features] Orthogonal mapping `W_{new}` for new subject s : 2D array, shape=[voxels, timepoints] Individual term `S_{new}` for new subject
4.722648
4.079994
1.157513
subjs = len(X) voxels = [X[i].shape[0] for i in range(subjs)] TRs = X[0].shape[1] features = self.features # Initialization W = self._init_transforms(subjs, voxels, features, self.random_state_) S = self._init_individual(subjs, voxels, TRs) R = self._update_shared_response(X, S, W, features) if logger.isEnabledFor(logging.INFO): objective = self._objective_function(X, W, R, S, self.lam) logger.info('Objective function %f' % objective) # Main loop for i in range(self.n_iter): W = self._update_transforms(X, S, R) S = self._update_individual(X, W, R, self.lam) R = self._update_shared_response(X, S, W, features) # Print objective function every iteration if logger.isEnabledFor(logging.INFO): objective = self._objective_function(X, W, R, S, self.lam) logger.info('Objective function %f' % objective) return W, R, S
def _rsrm(self, X)
Block-Coordinate Descent algorithm for fitting RSRM. Parameters ---------- X : list of 2D arrays, element i has shape=[voxels_i, timepoints] Each element in the list contains the fMRI data for alignment of one subject. Returns ------- W : list of array, element i has shape=[voxels_i, features] The orthogonal transforms (mappings) :math:`W_i` for each subject. R : array, shape=[features, timepoints] The shared response. S : list of array, element i has shape=[voxels_i, timepoints] The individual component :math:`S_i` for each subject.
2.891613
2.388194
1.210795
# Init the Random seed generator np.random.seed(self.rand_seed) # Draw a random W for each subject W = [random_state.random_sample((voxels[i], features)) for i in range(subjs)] # Make it orthogonal it with QR decomposition for i in range(subjs): W[i], _ = np.linalg.qr(W[i]) return W
def _init_transforms(self, subjs, voxels, features, random_state)
Initialize the mappings (Wi) with random orthogonal matrices. Parameters ---------- subjs : int The number of subjects. voxels : list of int A list with the number of voxels per subject. features : int The number of features in the model. random_state : `RandomState` A random state to draw the mappings. Returns ------- W : list of array, element i has shape=[voxels_i, features] The initialized orthogonal transforms (mappings) :math:`W_i` for each subject. Note ---- Not thread safe.
5.602273
4.219152
1.32782
subjs = len(X)
func = .0
for i in range(subjs):
    func += 0.5 * np.sum((X[i] - W[i].dot(R) - S[i])**2) \
        + gamma * np.sum(np.abs(S[i]))
return func
def _objective_function(X, W, R, S, gamma)
Evaluate the objective function. .. math:: \\sum_{i=1}^{N} 1/2 \\| X_i - W_i R - S_i \\|_F^2 + \\gamma \\|S_i\\|_1 Parameters ---------- X : list of array, element i has shape=[voxels_i, timepoints] Each element in the list contains the fMRI data for alignment of one subject. W : list of array, element i has shape=[voxels_i, features] The orthogonal transforms (mappings) :math:`W_i` for each subject. R : array, shape=[features, timepoints] The shared response. S : list of array, element i has shape=[voxels_i, timepoints] The individual component :math:`S_i` for each subject. gamma : float, default: 1.0 Regularization parameter for the sparseness of the individual components. Returns ------- func : float The RSRM objective function evaluated on the parameters to this function.
3.461295
3.17733
1.089372
subjs = len(X)
S = []
for i in range(subjs):
    S.append(RSRM._shrink(X[i] - W[i].dot(R), gamma))
return S
def _update_individual(X, W, R, gamma)
Update the individual components `S_i`. Parameters ---------- X : list of 2D arrays, element i has shape=[voxels_i, timepoints] Each element in the list contains the fMRI data for alignment of one subject. W : list of array, element i has shape=[voxels_i, features] The orthogonal transforms (mappings) :math:`W_i` for each subject. R : array, shape=[features, timepoints] The shared response. gamma : float, default: 1.0 Regularization parameter for the sparseness of the individual components. Returns ------- S : list of array, element i has shape=[voxels_i, timepoints] The individual component :math:`S_i` for each subject.
6.909562
6.814937
1.013885
return [np.zeros((voxels[i], TRs)) for i in range(subjs)]
def _init_individual(subjs, voxels, TRs)
Initializes the individual components `S_i` to empty (all zeros). Parameters ---------- subjs : int The number of subjects. voxels : list of int A list with the number of voxels per subject. TRs : int The number of timepoints in the data. Returns ------- S : list of 2D array, element i has shape=[voxels_i, timepoints] The individual component :math:`S_i` for each subject initialized to zero.
5.644478
4.345129
1.299036
subjs = len(X)
TRs = X[0].shape[1]
R = np.zeros((features, TRs))
# Project the subject data with the individual component removed into
# the shared subspace and average over all subjects.
for i in range(subjs):
    R += W[i].T.dot(X[i] - S[i])
R /= subjs
return R
def _update_shared_response(X, S, W, features)
Update the shared response `R`. Parameters ---------- X : list of 2D arrays, element i has shape=[voxels_i, timepoints] Each element in the list contains the fMRI data for alignment of one subject. S : list of array, element i has shape=[voxels_i, timepoints] The individual component :math:`S_i` for each subject. W : list of array, element i has shape=[voxels_i, features] The orthogonal transforms (mappings) :math:`W_i` for each subject. features : int The number of features in the model. Returns ------- R : array, shape=[features, timepoints] The updated shared response.
6.525561
5.181515
1.259392
A = Xi.dot(R.T)
A -= Si.dot(R.T)
# Solve the Procrustes problem
U, _, V = np.linalg.svd(A, full_matrices=False)
return U.dot(V)
def _update_transform_subject(Xi, Si, R)
Updates the mappings `W_i` for one subject. Parameters ---------- Xi : array, shape=[voxels, timepoints] The fMRI data :math:`X_i` for aligning the subject. Si : array, shape=[voxels, timepoints] The individual component :math:`S_i` for the subject. R : array, shape=[features, timepoints] The shared response. Returns ------- Wi : array, shape=[voxels, features] The orthogonal transform (mapping) :math:`W_i` for the subject.
3.485839
4.19272
0.831403
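A standalone illustration of the Procrustes update used above: with A = (Xi - Si) R', the product W = U V' from the SVD of A is the orthogonal matrix minimizing ||Xi - W R - Si||_F. The data here are noise-free and synthetic, so the generating mapping is recovered exactly.

import numpy as np

rng = np.random.RandomState(0)
voxels, features, timepoints = 30, 4, 50
R = rng.randn(features, timepoints)
W_true, _ = np.linalg.qr(rng.randn(voxels, features))
Xi = W_true.dot(R)                      # noise-free data, no individual term
Si = np.zeros_like(Xi)

A = (Xi - Si).dot(R.T)
U, _, V = np.linalg.svd(A, full_matrices=False)
W_est = U.dot(V)
assert np.allclose(W_est, W_true)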
subjs = len(X)
W = []
for i in range(subjs):
    W.append(RSRM._update_transform_subject(X[i], S[i], R))
return W
def _update_transforms(X, S, R)
Updates the mappings `W_i` for each subject. Parameters ---------- X : list of 2D arrays, element i has shape=[voxels_i, timepoints] Each element in the list contains the fMRI data for alignment of one subject. S : list of array, element i has shape=[voxels_i, timepoints] The individual component :math:`S_i` for each subject. R : array, shape=[features, timepoints] The shared response. Returns ------- W : list of array, element i has shape=[voxels_i, features] The orthogonal transforms (mappings) :math:`W_i` for each subject.
5.833858
5.593774
1.04292
pos = v > gamma
neg = v < -gamma
v[pos] -= gamma
v[neg] += gamma
v[np.logical_and(~pos, ~neg)] = .0
return v
def _shrink(v, gamma)
Soft-shrinkage of an array with parameter gamma. Parameters ---------- v : array Array containing the values to be applied to the shrinkage operator gamma : float Shrinkage parameter. Returns ------- v : array The same input array after the shrinkage operator was applied.
3.573369
4.209385
0.848905
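A quick standalone demo of the soft-thresholding (shrinkage) operator described above, written with an equivalent closed form of the piecewise rule rather than the in-place update.

import numpy as np

v = np.array([-3.0, -0.5, 0.0, 0.2, 2.5])
gamma = 1.0
shrunk = np.sign(v) * np.maximum(np.abs(v) - gamma, 0.0)
# shrunk == [-2.0, 0.0, 0.0, 0.0, 1.5], matching the piecewise rule in _shrink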
import matplotlib.pyplot as plt import math plt.figure() subjects = len(cm) root_subjects = math.sqrt(subjects) cols = math.ceil(root_subjects) rows = math.ceil(subjects/cols) classes = cm[0].shape[0] for subject in range(subjects): plt.subplot(rows, cols, subject+1) plt.imshow(cm[subject], interpolation='nearest', cmap=plt.cm.bone) plt.xticks(np.arange(classes), range(1, classes+1)) plt.yticks(np.arange(classes), range(1, classes+1)) cbar = plt.colorbar(ticks=[0.0, 1.0], shrink=0.6) cbar.set_clim(0.0, 1.0) plt.xlabel("Predicted") plt.ylabel("True label") plt.title("{0:d}".format(subject + 1)) plt.suptitle(title) plt.tight_layout() plt.show()
def plot_confusion_matrix(cm, title="Confusion Matrix")
Plots a confusion matrix for each subject
2.122406
2.025687
1.047746
image_data = image.get_data()
if image_data.shape[:3] != mask.shape:
    raise ValueError("Image data and mask have different shapes.")
if data_type is not None:
    cast_data = image_data.astype(data_type)
else:
    cast_data = image_data
return cast_data[mask]
def mask_image(image: SpatialImage, mask: np.ndarray, data_type: type = None ) -> np.ndarray
Mask image after optionally casting its type. Parameters ---------- image Image to mask. Can include time as the last dimension. mask Mask to apply. Must have the same shape as the image data. data_type Type to cast image to. Returns ------- np.ndarray Masked image. Raises ------ ValueError Image data and masks have different shapes.
2.580355
2.721519
0.94813
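A minimal usage sketch for the masking function above. It assumes nibabel is available and that mask_image is imported from the module where it is defined (e.g. brainiak.image in some versions); the image dimensions are arbitrary.

import numpy as np
import nibabel as nib

data = np.random.randn(4, 4, 4, 10)               # x, y, z, time
img = nib.Nifti1Image(data, affine=np.eye(4))
mask = np.zeros((4, 4, 4), dtype=bool)
mask[1:3, 1:3, 1:3] = True

masked = mask_image(img, mask, data_type=np.float32)
# masked has shape (mask.sum(), 10): one row per voxel inside the mask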
for image in images:
    yield [mask_image(image, mask, image_type) for mask in masks]
def multimask_images(images: Iterable[SpatialImage], masks: Sequence[np.ndarray], image_type: type = None ) -> Iterable[Sequence[np.ndarray]]
Mask images with multiple masks. Parameters ---------- images: Images to mask. masks: Masks to apply. image_type: Type to cast images to. Yields ------ Sequence[np.ndarray] For each mask, a masked image.
3.522394
4.730463
0.744619
for images in multimask_images(images, (mask,), image_type):
    yield images[0]
def mask_images(images: Iterable[SpatialImage], mask: np.ndarray, image_type: type = None) -> Iterable[np.ndarray]
Mask images. Parameters ---------- images: Images to mask. mask: Mask to apply. image_type: Type to cast images to. Yields ------ np.ndarray Masked image.
10.208649
13.025961
0.783716
images_iterator = iter(masked_images) first_image = next(images_iterator) first_image_shape = first_image.T.shape result = np.empty((first_image_shape[0], first_image_shape[1], n_subjects)) for n_images, image in enumerate(itertools.chain([first_image], images_iterator)): image = image.T if image.shape != first_image_shape: raise ValueError("Image {} has different shape from first " "image: {} != {}".format(n_images, image.shape, first_image_shape)) result[:, :, n_images] = image n_images += 1 if n_images != n_subjects: raise ValueError("n_subjects != number of images: {} != {}" .format(n_subjects, n_images)) return result.view(cls)
def from_masked_images(cls: Type[T], masked_images: Iterable[np.ndarray], n_subjects: int) -> T
Create a new instance of MaskedMultiSubjectData from masked images. Parameters ---------- masked_images : iterator Images from multiple subjects to stack along 3rd dimension n_subjects : int Number of subjects; must match the number of images Returns ------- T A new instance of MaskedMultiSubjectData Raises ------ ValueError Images have different shapes. The number of images differs from n_subjects.
2.237546
2.440391
0.91688
condition_idxs, epoch_idxs, _ = np.where(self)
_, unique_epoch_idxs = np.unique(epoch_idxs, return_index=True)
return condition_idxs[unique_epoch_idxs]
def extract_labels(self) -> np.ndarray
Extract condition labels. Returns ------- np.ndarray The condition label of each epoch.
5.093973
4.101325
1.242031
w = [] subjects = len(data) voxels = np.empty(subjects, dtype=int) # Set Wi to a random orthogonal voxels by features matrix for subject in range(subjects): if data[subject] is not None: voxels[subject] = data[subject].shape[0] rnd_matrix = random_states[subject].random_sample(( voxels[subject], features)) q, r = np.linalg.qr(rnd_matrix) w.append(q) else: voxels[subject] = 0 w.append(None) voxels = comm.allreduce(voxels, op=MPI.SUM) return w, voxels
def _init_w_transforms(data, features, random_states, comm=MPI.COMM_SELF)
Initialize the mappings (Wi) for the SRM with random orthogonal matrices. Parameters ---------- data : list of 2D arrays, element i has shape=[voxels_i, samples] Each element in the list contains the fMRI data of one subject. features : int The number of features in the model. random_states : list of `RandomState`s One `RandomState` instance per subject. comm : mpi4py.MPI.Intracomm The MPI communicator containing the data Returns ------- w : list of array, element i has shape=[voxels_i, features] The initialized orthogonal transforms (mappings) :math:`W_i` for each subject. voxels : list of int A list with the number of voxels per subject. Note ---- This function assumes that the numpy random number generator was initialized. Not thread safe.
3.641288
2.986872
1.219097
logger.info('Starting Probabilistic SRM') # Check the number of subjects if len(X) <= 1: raise ValueError("There are not enough subjects " "({0:d}) to train the model.".format(len(X))) # Check for input data sizes number_subjects = len(X) number_subjects_vec = self.comm.allgather(number_subjects) for rank in range(self.comm.Get_size()): if number_subjects_vec[rank] != number_subjects: raise ValueError( "Not all ranks have same number of subjects") # Collect size information shape0 = np.zeros((number_subjects,), dtype=np.int) shape1 = np.zeros((number_subjects,), dtype=np.int) for subject in range(number_subjects): if X[subject] is not None: assert_all_finite(X[subject]) shape0[subject] = X[subject].shape[0] shape1[subject] = X[subject].shape[1] shape0 = self.comm.allreduce(shape0, op=MPI.SUM) shape1 = self.comm.allreduce(shape1, op=MPI.SUM) # Check if all subjects have same number of TRs number_trs = np.min(shape1) for subject in range(number_subjects): if shape1[subject] < self.features: raise ValueError( "There are not enough samples to train the model with " "{0:d} features.".format(self.features)) if shape1[subject] != number_trs: raise ValueError("Different number of samples between subjects" ".") # Run SRM self.sigma_s_, self.w_, self.mu_, self.rho2_, self.s_ = self._srm(X) return self
def fit(self, X, y=None)
Compute the probabilistic Shared Response Model Parameters ---------- X : list of 2D arrays, element i has shape=[voxels_i, samples] Each element in the list contains the fMRI data of one subject. y : not used
3.114107
2.987892
1.042242
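A minimal usage sketch for fitting the probabilistic SRM described above. It assumes the class is brainiak.funcalign.srm.SRM and that the MPI communicator defaults to a single process; data sizes are made up.

import numpy as np
from brainiak.funcalign.srm import SRM

rng = np.random.RandomState(0)
n_timepoints = 100
data = [rng.randn(60, n_timepoints), rng.randn(70, n_timepoints),
        rng.randn(80, n_timepoints)]

model = SRM(n_iter=10, features=5)
model.fit(data)
shared = model.transform(data)   # list of [features, timepoints] arrays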
# Check if the model exist if hasattr(self, 'w_') is False: raise NotFittedError("The model fit has not been run yet.") # Check the number of subjects if len(X) != len(self.w_): raise ValueError("The number of subjects does not match the one" " in the model.") s = [None] * len(X) for subject in range(len(X)): if X[subject] is not None: s[subject] = self.w_[subject].T.dot(X[subject]) return s
def transform(self, X, y=None)
Use the model to transform matrix to Shared Response space Parameters ---------- X : list of 2D arrays, element i has shape=[voxels_i, samples_i] Each element in the list contains the fMRI data of one subject note that number of voxels and samples can vary across subjects y : not used (as it is unsupervised learning) Returns ------- s : list of 2D arrays, element i has shape=[features_i, samples_i] Shared responses from input data (X)
3.186861
2.809292
1.1344
A = Xi.dot(S.T)
# Solve the Procrustes problem
U, _, V = np.linalg.svd(A, full_matrices=False)
return U.dot(V)
def _update_transform_subject(Xi, S)
Updates the mappings `W_i` for one subject. Parameters ---------- Xi : array, shape=[voxels, timepoints] The fMRI data :math:`X_i` for aligning the subject. S : array, shape=[features, timepoints] The shared response. Returns ------- Wi : array, shape=[voxels, features] The orthogonal transform (mapping) :math:`W_i` for the subject.
3.751317
4.576279
0.819731
# Check if the model exist if hasattr(self, 'w_') is False: raise NotFittedError("The model fit has not been run yet.") # Check the number of TRs in the subject if X.shape[1] != self.s_.shape[1]: raise ValueError("The number of timepoints(TRs) does not match the" "one in the model.") w = self._update_transform_subject(X, self.s_) return w
def transform_subject(self, X)
Transform a new subject using the existing model. The subject is assumed to have received equivalent stimulation. Parameters ---------- X : 2D array, shape=[voxels, timepoints] The fMRI data of the new subject. Returns ------- w : 2D array, shape=[voxels, features] Orthogonal mapping `W_{new}` for new subject
5.400873
4.850527
1.113461
subjects = len(data) self.random_state_ = np.random.RandomState(self.rand_seed) random_states = [ np.random.RandomState(self.random_state_.randint(2 ** 32)) for i in range(len(data))] # Initialization step: initialize the outputs with initial values, # voxels with the number of voxels in each subject. w, _ = _init_w_transforms(data, self.features, random_states) shared_response = self._compute_shared_response(data, w) if logger.isEnabledFor(logging.INFO): # Calculate the current objective function value objective = self._objective_function(data, w, shared_response) logger.info('Objective function %f' % objective) # Main loop of the algorithm for iteration in range(self.n_iter): logger.info('Iteration %d' % (iteration + 1)) # Update each subject's mapping transform W_i: for subject in range(subjects): a_subject = data[subject].dot(shared_response.T) perturbation = np.zeros(a_subject.shape) np.fill_diagonal(perturbation, 0.001) u_subject, _, v_subject = np.linalg.svd( a_subject + perturbation, full_matrices=False) w[subject] = u_subject.dot(v_subject) # Update the shared response: shared_response = self._compute_shared_response(data, w) if logger.isEnabledFor(logging.INFO): # Calculate the current objective function value objective = self._objective_function(data, w, shared_response) logger.info('Objective function %f' % objective) return w, shared_response
def _srm(self, data)
Expectation-Maximization algorithm for fitting the probabilistic SRM. Parameters ---------- data : list of 2D arrays, element i has shape=[voxels_i, samples] Each element in the list contains the fMRI data of one subject. Returns ------- w : list of array, element i has shape=[voxels_i, features] The orthogonal transforms (mappings) :math:`W_i` for each subject. s : array, shape=[features, samples] The shared response.
3.125122
2.763078
1.131029
X = copy.deepcopy(X) if type(X) is not list: X = check_array(X) X = [X] n_train = len(X) for i in range(n_train): X[i] = X[i].T self.classes_ = np.arange(self.n_events) n_dim = X[0].shape[0] for i in range(n_train): assert (X[i].shape[0] == n_dim) # Double-check that data is z-scored in time for i in range(n_train): X[i] = stats.zscore(X[i], axis=1, ddof=1) # Initialize variables for fitting log_gamma = [] for i in range(n_train): log_gamma.append(np.zeros((X[i].shape[1], self.n_events))) step = 1 best_ll = float("-inf") self.ll_ = np.empty((0, n_train)) while step <= self.n_iter: iteration_var = self.step_var(step) # Based on the current segmentation, compute the mean pattern # for each event seg_prob = [np.exp(lg) / np.sum(np.exp(lg), axis=0) for lg in log_gamma] mean_pat = np.empty((n_train, n_dim, self.n_events)) for i in range(n_train): mean_pat[i, :, :] = X[i].dot(seg_prob[i]) mean_pat = np.mean(mean_pat, axis=0) # Based on the current mean patterns, compute the event # segmentation self.ll_ = np.append(self.ll_, np.empty((1, n_train)), axis=0) for i in range(n_train): logprob = self._logprob_obs(X[i], mean_pat, iteration_var) log_gamma[i], self.ll_[-1, i] = self._forward_backward(logprob) # If log-likelihood has started decreasing, undo last step and stop if np.mean(self.ll_[-1, :]) < best_ll: self.ll_ = self.ll_[:-1, :] break self.segments_ = [np.exp(lg) for lg in log_gamma] self.event_var_ = iteration_var self.event_pat_ = mean_pat best_ll = np.mean(self.ll_[-1, :]) logger.debug("Fitting step %d, LL=%f", step, best_ll) step += 1 return self
def fit(self, X, y=None)
Learn a segmentation on training data Fits event patterns and a segmentation to training data. After running this function, the learned event patterns can be used to segment other datasets using find_events Parameters ---------- X: time by voxel ndarray, or a list of such ndarrays fMRI data to be segmented. If a list is given, then all datasets are segmented simultaneously with the same event patterns y: not used (added to comply with BaseEstimator definition) Returns ------- self: the EventSegment object
2.871782
2.816127
1.019763
n_vox = data.shape[0] t = data.shape[1] # z-score both data and mean patterns in space, so that Gaussians # are measuring Pearson correlations and are insensitive to overall # activity changes data_z = stats.zscore(data, axis=0, ddof=1) mean_pat_z = stats.zscore(mean_pat, axis=0, ddof=1) logprob = np.empty((t, self.n_events)) if type(var) is not np.ndarray: var = var * np.ones(self.n_events) for k in range(self.n_events): logprob[:, k] = -0.5 * n_vox * np.log( 2 * np.pi * var[k]) - 0.5 * np.sum( (data_z.T - mean_pat_z[:, k]).T ** 2, axis=0) / var[k] logprob /= n_vox return logprob
def _logprob_obs(self, data, mean_pat, var)
Log probability of observing each timepoint under each event model Computes the log probability of each observed timepoint being generated by the Gaussian distribution for each event pattern Parameters ---------- data: voxel by time ndarray fMRI data on which to compute log probabilities mean_pat: voxel by event ndarray Centers of the Gaussians for each event var: float or 1D array of length equal to the number of events Variance of the event Gaussians. If scalar, all events are assumed to have the same variance Returns ------- logprob : time by event ndarray Log probability of each timepoint under each event Gaussian
3.178005
2.800354
1.134858
logprob = copy.copy(logprob) t = logprob.shape[0] logprob = np.hstack((logprob, float("-inf") * np.ones((t, 1)))) # Initialize variables log_scale = np.zeros(t) log_alpha = np.zeros((t, self.n_events + 1)) log_beta = np.zeros((t, self.n_events + 1)) # Set up transition matrix, with final sink state self.p_start = np.zeros(self.n_events + 1) self.p_end = np.zeros(self.n_events + 1) self.P = np.zeros((self.n_events + 1, self.n_events + 1)) label_ind = np.unique(self.event_chains, return_inverse=True)[1] n_chains = np.max(label_ind) + 1 # For each chain of events, link them together and then to sink state for c in range(n_chains): chain_ind = np.nonzero(label_ind == c)[0] self.p_start[chain_ind[0]] = 1 / n_chains self.p_end[chain_ind[-1]] = 1 / n_chains p_trans = (len(chain_ind) - 1) / t if p_trans >= 1: raise ValueError('Too few timepoints') for i in range(len(chain_ind)): self.P[chain_ind[i], chain_ind[i]] = 1 - p_trans if i < len(chain_ind) - 1: self.P[chain_ind[i], chain_ind[i+1]] = p_trans else: self.P[chain_ind[i], -1] = p_trans self.P[-1, -1] = 1 # Forward pass for i in range(t): if i == 0: log_alpha[0, :] = self._log(self.p_start) + logprob[0, :] else: log_alpha[i, :] = self._log(np.exp(log_alpha[i - 1, :]) .dot(self.P)) + logprob[i, :] log_scale[i] = np.logaddexp.reduce(log_alpha[i, :]) log_alpha[i] -= log_scale[i] # Backward pass log_beta[-1, :] = self._log(self.p_end) - log_scale[-1] for i in reversed(range(t - 1)): obs_weighted = log_beta[i + 1, :] + logprob[i + 1, :] offset = np.max(obs_weighted) log_beta[i, :] = offset + self._log( np.exp(obs_weighted - offset).dot(self.P.T)) - log_scale[i] # Combine and normalize log_gamma = log_alpha + log_beta log_gamma -= np.logaddexp.reduce(log_gamma, axis=1, keepdims=True) ll = np.sum(log_scale[:(t - 1)]) + np.logaddexp.reduce( log_alpha[-1, :] + log_scale[-1] + self._log(self.p_end)) log_gamma = log_gamma[:, :-1] return log_gamma, ll
def _forward_backward(self, logprob)
Runs forward-backward algorithm on observation log probs Given the log probability of each timepoint being generated by each event, run the HMM forward-backward algorithm to find the probability that each timepoint belongs to each event (based on the transition priors in p_start, p_end, and P) See https://en.wikipedia.org/wiki/Forward-backward_algorithm for mathematical details Parameters ---------- logprob : time by event ndarray Log probability of each timepoint under each event Gaussian Returns ------- log_gamma : time by event ndarray Log probability of each timepoint belonging to each event ll : float Log-likelihood of fit
2.225715
2.12108
1.049331
xshape = x.shape
_x = x.flatten()
y = utils.masked_log(_x)
return y.reshape(xshape)
def _log(self, x)
Modified version of np.log that manually sets values <=0 to -inf Parameters ---------- x: ndarray of floats Input to the log function Returns ------- log_ma: ndarray of floats log of x, with x<=0 values replaced with -inf
7.410101
7.695527
0.96291
if event_pat.shape[1] != self.n_events:
    raise ValueError(("Number of columns of event_pat must match "
                      "number of events"))
self.event_pat_ = event_pat.copy()
def set_event_patterns(self, event_pat)
Set HMM event patterns manually Rather than fitting the event patterns automatically using fit(), this function allows them to be set explicitly. They can then be used to find corresponding events in a new dataset, using find_events(). Parameters ---------- event_pat: voxel by event ndarray
3.741298
4.02105
0.930428
if var is None: if not hasattr(self, 'event_var_'): raise NotFittedError(("Event variance must be provided, if " "not previously set by fit()")) else: var = self.event_var_ if not hasattr(self, 'event_pat_'): raise NotFittedError(("The event patterns must first be set " "by fit() or set_event_patterns()")) if scramble: mean_pat = self.event_pat_[:, np.random.permutation(self.n_events)] else: mean_pat = self.event_pat_ logprob = self._logprob_obs(testing_data.T, mean_pat, var) lg, test_ll = self._forward_backward(logprob) segments = np.exp(lg) return segments, test_ll
def find_events(self, testing_data, var=None, scramble=False)
Applies learned event segmentation to new testing dataset After fitting an event segmentation using fit() or setting event patterns directly using set_event_patterns(), this function finds the same sequence of event patterns in a new testing dataset. Parameters ---------- testing_data: timepoint by voxel ndarray fMRI data to segment based on previously-learned event patterns var: float or 1D ndarray of length equal to the number of events default: uses variance that maximized training log-likelihood Variance of the event Gaussians. If scalar, all events are assumed to have the same variance. If fit() has not previously been run, this must be specified (cannot be None). scramble: bool : default False If true, the order of the learned events is shuffled before fitting, to give a null distribution Returns ------- segments : time by event ndarray The resulting soft segmentation. segments[t,e] = probability that timepoint t is in event e test_ll : float Log-likelihood of model fit
3.954244
3.125794
1.265037
check_is_fitted(self, ["event_pat_", "event_var_"])
X = check_array(X)
segments, test_ll = self.find_events(X)
return np.argmax(segments, axis=1)
def predict(self, X)
Applies learned event segmentation to new testing dataset Alternative function for segmenting a new dataset after using fit() to learn a sequence of events, to comply with the sklearn Classifier interface Parameters ---------- X: timepoint by voxel ndarray fMRI data to segment based on previously-learned event patterns Returns ------- Event label for each timepoint
6.450968
8.547134
0.754752
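A minimal usage sketch for the event segmentation workflow above. It assumes the class is brainiak.eventseg.event.EventSegment with the fit()/predict() interface documented here; the synthetic piecewise-constant data are only for illustration.

import numpy as np
from brainiak.eventseg.event import EventSegment

rng = np.random.RandomState(0)
n_timepoints, n_voxels, n_events = 60, 20, 3
patterns = rng.randn(n_events, n_voxels)
labels = np.repeat(np.arange(n_events), n_timepoints // n_events)
X = patterns[labels] + 0.1 * rng.randn(n_timepoints, n_voxels)

es = EventSegment(n_events=n_events)
es.fit(X)
predicted = es.predict(X)       # event label for each timepoint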
Dz = stats.zscore(D, axis=1, ddof=1) ev_var = np.empty(event_pat.shape[1]) for e in range(event_pat.shape[1]): # Only compute variances for weights > 0.1% of max weight nz = weights[:, e] > np.max(weights[:, e])/1000 sumsq = np.dot(weights[nz, e], np.sum(np.square(Dz[nz, :] - event_pat[:, e]), axis=1)) ev_var[e] = sumsq/(np.sum(weights[nz, e]) - np.sum(np.square(weights[nz, e])) / np.sum(weights[nz, e])) ev_var = ev_var / D.shape[1] return ev_var
def calc_weighted_event_var(self, D, weights, event_pat)
Computes normalized weighted variance around event pattern Utility function for computing variance in a training set of weighted event examples. For each event, the sum of squared differences for all timepoints from the event pattern is computed, and then the weights specify how much each of these differences contributes to the variance (normalized by the number of voxels). Parameters ---------- D : timepoint by voxel ndarray fMRI data for which to compute event variances weights : timepoint by event ndarray specifies relative weights of timepoints for each event event_pat : voxel by event ndarray mean event patterns to compute variance around Returns ------- ev_var : ndarray of variances for each event
3.085838
2.874753
1.073427
lg, test_ll = self._forward_backward(np.zeros((t, self.n_events)))
segments = np.exp(lg)
return segments, test_ll
def model_prior(self, t)
Returns the prior probability of the HMM Runs forward-backward without any data, showing the prior distribution of the model (for comparison with a posterior). Parameters ---------- t: int Number of timepoints Returns ------- segments : time by event ndarray segments[t,e] = prior probability that timepoint t is in event e test_ll : float Log-likelihood of model (data-independent term)
13.940866
6.086598
2.29042
try:
    return _resolve_value(safe_chain_getattr(obj, attr))
except AttributeError:
    return value
def chain_getattr(obj, attr, value=None)
Get chain attribute for an object.
6.212589
6.629694
0.937085
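A hypothetical usage sketch; Settings and conf are made-up objects for illustration, and it assumes the module's _resolve_value helper returns plain (non-callable) attribute values unchanged.

class Settings:
    class db:
        host = 'localhost'

conf = Settings()
chain_getattr(conf, 'db.host')          # -> 'localhost'
chain_getattr(conf, 'db.port', 5432)    # missing attribute -> default 5432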
if split is None:
    sl = 0
    join = False
else:
    sl = len(split)
    join = True
result = []
rl = 0
for element in iterable:
    element = prefix + element + postfix
    el = len(element)
    if len(result) > 0:
        el += sl
    rl += el
    if rl <= limit:
        result.append(element)
    else:
        break
if join:
    result = split.join(result)
return result
def trim_iterable(iterable, limit, *, split=None, prefix='', postfix='')
Trim the iterable so that its total length is no more than limit. If split is specified, a string joined with it is returned instead of a list. :return: the trimmed list, or a joined string if split is given.
2.907312
2.990264
0.972259
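A short usage sketch of trim_iterable as documented above; the tag strings are arbitrary examples.

tags = ['python', 'numpy', 'scipy', 'matplotlib']
trim_iterable(tags, 12)                          # -> ['python', 'numpy']
trim_iterable(tags, 20, split=',')               # -> 'python,numpy,scipy'
trim_iterable(tags, 30, split=' ', prefix='#')   # -> '#python #numpy #scipy'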