id
int64
0
25.6k
text
stringlengths
0
4.59k
15,100
can be efficiently implemented using dedicated computing hardware such as graphical processor units (gpusand other parallel computing architecture note also that many matrix computations that run in quadratic time can be replaced with linear-time componentwise multiplication specificallymultiplication of vector with diagonal matrix is equivalent to componentwise multiplicationa = |{zb diag(aconsequentlywe can write dl- dl- > dl asdl- (zl- > dl we now summarize the back-propagation algorithm for the computation of typical /th in the following algorithmlines to are the feed-forward part of the algorithmand lines to are the back-propagation part of the algorithm algorithm computing the gradient of typical (thl inputtraining example (xy)weight matrices and bias vectors {wl bl } = =thl activation functions {sl } = outputthe derivatives with respect to all weight matrices and bias vectors for do /feed-forward zl wl al- bl al sl (zl dl zl for do dl bl dl > - wl dl- (zl- saturation /arbitrary assignment needed to finish the loop /back-propagation > dl return and for all and the value (xal (if neededl note that for the gradient of (thto exist at every pointwe need the activation functions to be differentiable everywhere this is the casefor examplefor the logistic activation function in figure it is not the case for the relu functionwhich is differentiable everywhereexcept at howeverin practicethe kink of the relu function at is unlikely to trip the back-propagation algorithmbecause rounding errors and the finiteprecision computer arithmetic make it extremely unlikely that we will need to evaluate the relu at precisely this is the reason why in theorem we merely required that (this almost-everywhere differentiable in spite of its kink at the originthe relu has an important advantage over the logistic function while the derivative of the logistic function decays exponentially fast to zero as we move away from the origina phenomenon referred to as saturationthe derivative of the relu function is always unity for positive thusfor large positive zthe derivative of the logistic function does not carry any useful informationbut the derivative of the relu can help guide gradient optimization algorithm the situation for the heaviside function in figure is even worsebecause its derivative is completely noninformative for any in this respectthe lack of saturation of the relu function for makes it desirable activation function for training network via back-propagation
15,101
finallynote that to obtain the gradient ` /th of the training losswe simply need to loop algorithm over all the training examplesas follows algorithm computing the gradient of the training loss inputtraining set {(xi yi )}ni= weight matrices and bias vectors {wl bl } = =thactivation functions {sl } = outputthe gradient of the training loss for do /loop over all training examples ol run algorithm with input (xi yi to compute bl = ci return ni= and ni= for all wl bl example (squared-error and cross-entropy lossthe back-propagation algorithm requires formula for dl in line in particularto execute line we need to specify both loss function and an sl that defines the output layerg( thal sl (zl for instancein the multi-logit classification of inputs into pl categories labeled (pl )the output layer is defined via the softmax function exp(zl sl zl pl = exp(zl, in other wordsg( this probability vector such that its ( + )-st component gy+ ( thg( thxis the estimate or prediction of the true conditional probability ( xcombining the softmax output with the cross-entropy lossas was done in ( )yieldslossf ( ) ( thx)ln ( thxln gy+ ( thp pl -zy+ ln = exp(zk hencewe obtain the vector dl with components ( pl pl dl, zk -zy+ ln = exp(zk gk ( th { note that we can remove node from the final layer of the multi-logit networkbecause ( th(which corresponds to the classcan be eliminatedusing the fact pl that ( th = gk ( thfor numerical comparisonsee exercise as another examplein nonlinear multi-output regression (see example )the output function sl is typically of the form ( )so that sl / diag( ( ) ( pl )combining the output ( thsl (zl with the squared-error loss yieldsloss(yg( th)ky ( th) pl ( ( th)) = henceline in algorithm simplifies todl sl (zl ( thy
15,102
methods for training neural networks have been studied for long timeyet it is only recently that there have been sufficient computational resources to train them effectively the training of neural networks requires minimization of training loss` (th) ni= ci (th)which is typically difficult high-dimensional optimization problem with multiple local minima we next consider number of simple training methods in this sectionthe vectors dt and use the notation of section and should not be confused with the derivative and the prediction function grespectively steepest descent if we can compute the gradient of ` ( (th)via back-propagationthen we can apply the steepest descent algorithmwhich reads as follows starting from guess th we iterate the following step until convergencetht+ tht at ut learning rate ( where ut :(tht and at is the learning rate th observe thatrather than operating directly on the weights and biaseswe operate inpl stead on th :{wl bl } = - column vector of length = (pl- pl pl that stores all the weight and bias parameters the advantage of organizing the computations in this way is that we can easily compute the learning rate at for examplevia the barzilai-borwein formula in ( algorithm training via steepest descent inputtraining set {(xi yi )}ni= initial weight matrices and bias vectors {wl bl } = =th activation functions {sl } = outputthe parameters of the trained learner ut- /initialization while stopping condition is not met do compute the gradient ut (tht using algorithm th ut ut- if then /check if hessian is positive-definite /barzilai-borwein else - xa /failing positivitydo something heuristic - ut tht+ tht - + return tht as the minimizer of the training loss typicallywe initialize the algorithm with small random values for th while being careful to avoid saturating the activation function for examplein the case of the relu
15,103
activation functionwe will use small positive values to ensure that its derivative is not zero zero derivative of the activation function prevents the propagation of information useful for computing good search direction recall that computation of the gradient of the training loss via algorithm requires averaging over all training examples when the size of the training set tn is too largecomputation of the gradient `tn /th via algorithm may be too costly in such caseswe may employ the stochastic gradient descent algorithm in this algorithmwe view the training loss as an expectation that can be approximated via monte carlo sampling in particularif is random variable with distribution [ / for nthen we can write loss(yk (xk th) loss(yk (xk th)` (th) = stochastic gradient descent we can thus approximate ` (th)via monte carlo estimator using iid copies of kn ` ( (th):loss(yki (xki th) = the iid monte carlo sample kn is called minibatch (see also exercise typicallyn so that the probability of observing ties in minibatch of size is negligible finallynote that if the learning rate of the stochastic gradient descent algorithm satisfies the conditions in ( )then the stochastic gradient descent algorithm is simply version of the stochastic approximation algorithm levenberg-marquardt method since neural network with squared-error loss is special type of nonlinear regression modelit is possible to train it using classical nonlinear least-squares minimization methodssuch as the levenberg-marquardt algorithm for simplicity of notationsuppose that the output of the net for an input is scalar (xfor given input parameter th of dimension dim(th)the levenberg-marquardt algorithm requires computation of the following vector of outputsg( th:[ ( th) (xn th)]as well as the matrix of jacobigof at th to compute these quantitieswe can again use the back-propagation algorithm as follows algorithm output for training via levenberg-marquardt inputtraining set {(xi yi )}ni= parameter th outputvector ( thand matrix of jacobi for use in algorithm for do /loop over all training examples run algorithm with input (xi yi (using in line to compute (xi thand (xthi thg( th[ ( th) (xn th)] ( thi (xn th th th return ( thand minibatch
15,104
the levenberg-marquardt algorithm is not suitable for networks with large number of parametersbecause the cost of the matrix computations becomes prohibitive for instanceobtaining the levenberg-marquardt search direction in ( usually incurs an ( cost in additionthe levenberg-marquardt algorithm is applicable only when we wish to train the network using the squared-error loss both of these shortcomings are mitigated to an extent with the quasi-newton or adaptive gradient methods described next limited-memory bfgs method all the methods discussed so far have been first-order optimization methodsthat ismetht (tht at the current (and/or immediate pastcanods that only use the gradient vector ut :th didate solution tht in trying to design more efficient second-order optimization methodwe may be tempted to use newton' method with search direction- - ut quasi-newton method where ht is the matrix of second-order partial derivatives of ` ( (th)at tht there are two problems with this approach firstwhile the computation of ut via algorithm typically costs ( )the computation of ht costs ( secondeven if we have somehow computed ht very fastcomputing the search direction - ut still incurs an ( cost both of these considerations make newton' method impractical for large insteada practical alternative is to use quasi-newton methodin which we directly aim to approximate - via matrix ct that satisfies the secant conditionct where dt :tht tht- and :ut ut- an ingenious formula that generates suitable sequence of approximating matrices {ct (each satisfying the secant conditionis the bfgs updating formula ( )which can be written as the recursion (see exercise )( ct ut > ct- ut > ut dt > ut :( > dt )- limited memory bfgs this formula allows us to update ct- to ct and then compute ct ut in ( time while this quasi-newton approach is better than the ( cost of newton' methodit may be still too costly in large-scale applications insteadan approximate or limited memory bfgs updating can be achieved in (dtime the idea is to store few of the most recent pairs {dt in order to evaluate its action on vector ut without explicitly constructing and storing ct in computer memory this is possiblebecause updating to in ( requires only the pair and similarly computing ct from only requires the history of the updates dt which can be shown as follows define the matrices at via the backward recursion )at :ia - : > and observe that all matrix vector productsa = for can be computed efficiently via the backward recursion starting with qt ut : > - tt (
15,105
in addition to { }we will make use of the vectors { defined via the recursionr : - > - ( at the final iteration tthe bfgs updating formula ( can be rewritten in the formct > - ct- at- ut dt > by iterating the recursion ( backwards to we can writect > > > = that iswe can express ct in terms of the initial and the entire history of all bfgs values { }as claimed furtherwith the { computed via ( and ( )we can writect > > > = > > > = > > > = hencefrom the definition of the { in ( )we obtain ct > > = > > = > rt rt given and the history of all recent bfgs values { }hj= the computation of the quasinewton search direction -ch can be accomplished via the recursions ( and ( as summarized in algorithm note that if is diagonal matrixsay the identity matrixthen is cheap to compute and the cost of running algorithm is ( dthusfor fixed length of the bfgs historythe cost of the limited-memory bfgs updating grows linearly in dmaking it viable optimization algorithm in large-scale applications
15,106
algorithm limited-memory bfgs update inputbfgs history list { }hj= initial and input outputd -ch uwhere ct ut dt > ct- ut > ut dt > for hh do /backward recursion to compute - ui di ti > ui ti for do ui (ti > qdi /compute ( /compute recursion ( return -qthe value of -ch in summarya quasi-newton algorithm with limited-memory bfgs updating reads as follows algorithm quasi-newton minimization with limited-memory bfgs inputtraining set {(xi yi )}ni= initial weight matrices and bias vectors {wl bl } = =th activation functions {sl } = and history parameter outputthe parameters of the trained learner ut- /initialization while stopping condition is not met do (tht via algorithm compute `value ` ( (tht )and ut th ut ut- add ( to the bfgs history as the newest bfgs pair if the number of pairs in the bfgs history is greater than then remove the oldest pair from the bfgs history compute via algorithm using the bfgs historyc iand ut - while ` ( (tht )`value - dut do / /line-search along quasi-newton direction -ad tht+ tht - + return tht as the minimizer of the training loss adaptive gradient methods recall that the limited-memory bfgs method in the previous section determines search direction using the recent history of previously computed gradients {ut and input parameters {tht this is because the bfgs pairs {dt can be easily constructed from the identitiesdt tht tht- and ut ut- in other wordsusing only past gradient computations and with little extra computationit is possible to infer some of the second-order information
15,107
contained in the hessian matrix of ` (thin addition to the bfgs methodthere are other ways in which we can exploit the history of past gradient computations one approach is to use the normal approximation methodin which the hessian of ` at tht is approximated via ui > ( ht = - + where ut- + ut are the most recently computed gradients and is tuning parameter (for exampleg /hthe search direction is then given by - - ut which can be computed quickly in ( dtime either using the qr decomposition (exercises and )or the sherman-morrison algorithm this approach requires that we store the last gradient vectors in memory another approach that completely bypasses the need to invert hessian approximation bt is the adaptive gradient or adagrad methodin which we only store the diagonal of and use the search directionb )- / ut -diag( adagrad we can avoid storing any of the gradient history by instead using the slightly different search direction -ut vt where the vector vt is updated recursively via vt vt- ut ut with this updating of vt the difference between the vector vt and the diagonal of will be negligible the hessian more sophisticated version of adagrad is the adaptive moment estimation or adam methodin which we not only average the vectors {vt }but also average the gradient vectors {ut }as follows algorithm updating of search direction at iteration via adam inputut ut- vt- tht and parameters (ahv hu )equal toe ( - outputut  tht+ ut - hu vt ut hv - hv  ut ut ( - - vt vt ( hv  tht+ tht ut * - return ut vt tht+ here we divide two vectors componentwise adam
15,108
momentum method yet another computationally cheap approach is the momentum methodin which the steepest descent iteration ( is modified to tht+ tht at ut dt where dt tht tht- and is tuning parameter this strategy frequently performs better than the "vanillasteepest descent methodbecause the search direction is less likely to change abruptly numerical experience suggests that the vanilla steepest-descent algorithm and the levenberg-marquardt algorithm are effective for networks with shallow architecturesbut not for networks with deep architectures in comparisonthe stochastic gradient descent methodthe limited-memory bfgs algorithm or any of the adaptive gradient methods in this sectioncan frequently handle networks with many hidden layers (provided that any tuning parameters and initialization values are carefully chosen via experimentation examples in python in this section we provide two numerical examples in python in the first examplewe train neural network with the stochastic gradient descent method using the polynomial regression data from example and without using any specialized python packages in the second examplewe consider realistic application of neural network to image recognition and classification here we use the specialized open-source python package pytorch simple polynomial regression consider again the polynomial regression data set depicted in figure we use network with architecture [ [ in other wordswe have two hidden layers with neuronsresulting in learner with total of dim(th parameters to implement such neural networkwe first import the numpy and the matplotlib packagesthen read the regression problem data and define the feed-forward neural network layers neuralnetpurepython py import numpy as np import matplotlib pyplot as plt #%import data data np genfromtxt ('polyreg csv ',delimiter =',' data [, reshape - , data [, reshape - , network setup [ shape [ , , , size of layers len( - number of layers
15,109
nextthe initialize method generates random initial weight matrices and bias vecl tors {wl bl } = specificallyall parameters are initialized with values distributed according to the standard normal distribution def initialize (pw_sig )wb [[]]len( )[[]]len(pfor in range ( len( )) [ ]w_sig np random randn ( [ ] [ - ] [ ]w_sig np random randn ( [ ] return , , initialize (pinitialize weight matrices and bias vectors the following code implements the relu activation function from figure and the squared error loss note that these functions return both the function values and the corresponding gradients def relu( , )relu activation function value and derivative if =lreturn znp ones_like (zelseval np maximum ( ,zrelu function element -wise np array ( > dtype float derivative of relu element -wise return val def loss_fn ( , )return ( )** ( ys relu nextwe implement the feed-forward and backward-propagation algorithm herewe have implemented algorithm inside the backward-propagation loop def feedforward ( , , )azgr_s [ ]* + [ ]* + [ ]* + [ reshape - , for in range ( , + ) [lw[la[ - [laffine transformation [ ]gr_s[ls( [ ],lactivation function return azgr_s def backward ( , , , ) =len(ydelta [ ]* + dc_db dc_dw [ ]* + [ ]* + loss = for in range ( )loop over training examples azgr_s feedforward ( [ ,:twbcost gr_c loss_fn ( [ ] [ ]cost and gradient wrt loss +cost/
15,110
delta [lgr_s[lgr_c for in range ( , - ) , dci_dbl delta [ldci_dwl delta [la[ - ---sum up over samples ---dc_db [ldc_db [ldci_dbl / dc_dw [ldc_dw [ldci_dwl / delta [ - gr_s[ - [lt delta [lreturn dc_dw dc_db loss as explained in section it is sometimes more convenient to collect all the weight matrices and bias vectors {wl bl } = into single vector th consequentlywe code two functions that map the weight matrices and the bias vectors into single parameter vectorand vice versa def list vec ( , )converts list of weight matrices and bias vectors into one column vector b_stack np vstack ([ [ifor in range ( len( ))w_stack np vstack ( [iflatten (reshape - , for in range ( len( ))vec np vstack (b_stack w_stack ]return vec #%def vec list (vec )converts vector to weight matrices and bias vectors wb [[]]len( ,[[]]len(pp_count for in range ( len( ))construct bias vectors [lvecp_count :p_count + [ ]reshape - , p_count p_count [lfor in range ( len( ))construct weight matrices [lvecp_count :p_count [ ]* [ - ]reshape ( [ ]pl - ]p_count p_count ( [ ]* [ - ]return wb finallywe run the stochastic gradient descent for iterations using minibatch of size and constant learning rate of at batch_size lr beta list vec ( ,bloss_arr [
15,111
len(xnum_epochs print (epoch batch loss"print ("for epoch in range ( num_epochs + )batch_idx np random choice (nbatch_size batch_x xbatch_idx reshape - , batch_y =ybatch_idx reshape - , dc_dw dc_db loss backward ( , ,batch_x batch_y d_beta list vec (dc_dw dc_db loss_arr append (loss flatten ([ ]ifepoch == or np mod(epoch , == )print (epoch ,"",loss flatten ([ ]beta beta lrd_beta , vec list (beta ,pcalculate the loss of the entire training set dc_dw dc_db loss backward ( , , ,yprint (entire training set loss ",loss flatten ([ ]xx np arange ( , , y_preds np zeros_like (xxfor in range (len(xx))a__ feedforward (xx[ ], ,by_preds [ ] [lplt plot( , ' 'markersize label ' 'plt plot(np array (xx)y_preds ' ',label 'fit 'plt legend (plt xlabel (' 'plt ylabel (' 'plt show (plt plot(np array loss_arr )' 'plt xlabel ('iteration 'plt ylabel ('training loss 'plt show (epoch batch loss entire training set loss the left panel of figure shows trained neural network with training loss of approximately as seen from the right panel of figure the algorithm initially makes rapid progress until it settles down into stationary regime after iterations
15,112
fit batch loss output input iteration figure left panelthe fitted neural network with training loss of ` (gt right panelthe evolution of the estimated lossb ` (gt (th))over the steepest-descent iterations image classification in this sectionwe will use the package pytorchwhich is an open-source machine learning library for python pytorch can easily exploit any graphics processing unit (gpufor accelerated computation as an examplewe consider the fashion-mnist data set from each image according to its label specificallythe labels aret-shirttrouserpulloverdresscoatsandalshirtsneakerbagand ankle boot figure depicts typical ankle boot in the left panel and typical dress in the right panel to start withwe import the required libraries and load the fashion-mnist data set figure leftan ankle boot righta dress imageclassificationpytorch py import torch import torch nn as nn from torch autograd import variable import pandas as pd import numpy as np
15,113
import matplotlib pyplot as plt from torch utils data import dataset dataloader from pil import image import torch nn functional as ###############################################################data loader class ###############################################################class loaddata dataset )def __init__ (self fname transform =none)data pd read_csv fname self np array (data iloc [: :dtype =np uint reshape (- self np array (data iloc [: ]def __len__ (self)return len(self xdef __getitem__ (self idx)img self [idxlbl self [idxreturn (img lblload the image data train_ds loaddata ('fashionmnist /fashion mnist_train csv 'test_ds loaddata ('fashionmnist /fashion mnist_test csv 'set labels dictionary labels { 'tshirt ' 'trouser ' 'pullover ' 'dress ' 'coat ' 'sandal ' 'shirt ' 'sneaker ' 'bag ' 'ankle boot 'since an image input data is generally memory intensiveit is important to partition the data set into (mini-)batches the code below defines batch size of images and initializes the pytorch data loader objects these objects will be used for efficient iteration over the data set load the data in batches batch_size train_loader torch utils data dataloader dataset =train_ds batch_size batch_size shuffle =truetest_loader torch utils data dataloader dataset =test_ds batch_size batch_size shuffle =truenextto define the network architecture in pytorch all we need to do is define an instance of the torch nn module class choosing network architecture with good generalization properties can be difficult task herewe use network with two convolution layers (defined in the cnn_layer block) kerneland three hidden layers (defined in the flat_layer blocksince there are ten possible output labelsthe output layer has ten nodes more specificallythe first and the second convolution layers have and output channels combining this with the definition of the kernelwe conclude that the size
15,114
of the first flat hidden layer should besecond convolution layer } - ( { first convolution layer where the multiplication by follows from the fact that the second convolution layer has output channels having said thatthe flat_fts variable determines the number of output layers of the convolution block this number is used to define the size of the first hidden layer of the flat_layer block the rest of the hidden layers have neurons and we use the relu activation function for all layers finallynote that the forward method in the cnn class implements the forward pass define the network class cnn(nn module )def __init__ (self)super (cnn self__init__ (self cnn_layer nn sequential nn conv ( kernel_size = stride =( , ))nn relu (nn conv ( kernel_size = stride =( , ))nn relu (self flat_fts ((( - + - + ** * self flat_layer nn sequential nn linear (self flat_fts nn relu (nn linear ( nn relu (nn linear ( nn relu (nn linear ( )def forward (self )out self cnn_layer (xout out view (- self flat_fts out self flat_layer (outreturn out nextwe specify how the network will be trained we choose the device typenamelythe central processing unit (cpuor the gpu (if available)the number of training iterations (epochs)and the learning rate thenwe create an instance of the proposed convolution network and send it to the predefined device (cpu or gpunote how easily one can switch between the cpu or the gpu without major changes to the code in addition to the specifications abovewe need to choose an appropriate loss function and training algorithm herewe use the cross-entropy loss and the adam adaptive gradient algorithm once these parameters are setthe learning proceeds to evaluate the gradient of the loss function via the back-propagation algorithm
15,115
learning parameters num_epochs learning_rate device torch device ('cpu 'use this to run on cpu device torch device ('cuda 'use this to run on gpu instance of the conv net cnn cnn (cnn todevice device #loss function and optimizer criterion nn crossentropyloss (optimizer torch optim adam(cnn parameters (lrlearning_rate the learning loop losses [for epoch in range ( num_epochs + )for (images labels in enumerate train_loader )images variable images float ()todevice device labels variable labels todevice device optimizer zero_grad (outputs cnnimages loss criterion (outputs labels loss backward (optimizer step (losses append (loss item ()ifepoch == or epoch = )print (epoch "epoch "training loss"loss item ()evaluate on the test set cnn eval (correct total for images labels in test_loader images variable images float ()todevice device outputs cnnimages _predicted torch maxoutputs data total +labels size ( correct +predicted cpu (=labels sum (print ("test accuracy of the model on the , training test images "( correct item (total ),"%"plot plt rc('text 'usetex =trueplt rc('font 'family ='serif ',size = plt tight_layout (plt plot(np array losses )[ lenlosses )]plt xlabel ( 'iteration }',fontsize = plt ylabel ( 'batch loss}',fontsize = plt subplots_adjust (top = plt show (
15,116
epoch training loss epoch training loss epoch training loss epoch training loss epoch training loss epoch training loss test accuracy of the model on the , training test images finallywe evaluate the network performance using the test data set typical minibatch loss as function of iteration is shown in figure and the proposed neural network achieves about accuracy on the test set batch loss iteration figure the batch loss history further reading popular book written by some of the pioneers of deep learning is [ for an excellent and gentle introduction to the intuition behind neural networkswe recommend [ summary of many effective gradient descent methods for training of deep networks is given in [ an early resource on the limited-memory bfgs method is [ ]and more recent resource includes [ ]which makes recommendations on the best choice for the length of the bfgs history (that isthe value of the parameter hexercises show that the softmax function exp(zsoftmax exp(zk satisfies the invariance propertysoftmax(zsoftmax( )for any constant
15,117
projection pursuit is network with one hidden layer that can be written asprojection pursuit (xs (ox)where is univariate smoothing cubic spline if we use squared-error loss with tn {yi xi }ni= we need to minimize the training loss  yi (oxi = with respect to and all cubic smoothing splines this training of the network is typically tackled iteratively in manner similar to the em algorithm in particularwe iterate ( the following steps until convergence (agiven the missing data ot compute the spline by training cubic smoothing spline on {yi > xi the smoothing coefficient of the spline may be determined as part of this step (bgiven the spline function compute the next projection vector ot+ via iterative reweighted least squaresot+ argmin (et xb)st (et xb) ( iterative reweighted least squares where yi ( > xi et, :ot xi ( > xi is the adjusted responseand / diag( ( > ) ( > xn )is diagonal matt rix apply taylor' theorem to the function and derive the iterative reweighted least-squares optimization program ( suppose that in the stochastic gradient descent method we wish to repeatedly draw minibatches of size from tn where we assume that for some large integer instead of repeatedly resampling from tn an alternative is to reshuffle tn via random permutation and then advance sequentially through the reshuffled training set to construct non-overlapping minibatches single traversal of such reshuffled training set is called an epoch the following pseudo-code describes the procedure epoch
15,118
algorithm stochastic gradient descent with reshuffling inputtraining set tn {(xi yi )}ni= initial weight matrices and bias vectors {wl bl } = th activation functions {sl } = learning rates { outputthe parameters of the trained learner and epoch while stopping condition is not met do iid draw un ( let be the permutation of { nthat satisfies up upn (xi yi (xpi ypi for /reshuffle tn for do jn ` = - ) + loss(yi (xi th) tht+ tht at th` (tht - + epoch epoch /number of reshuffles or epochs return tht as the minimizer of the training loss write python code that implements the stochastic gradient descent with data reshufflingand use it to train the neural net in section denote the pdf of the ( sdistribution by phs (*)and let phs ( ( phs ( ln dx phs ( rd be the kullback-leibler divergence between the densities of the ( and ( distributions on rd show that - - ( tr( - ln | ( ( hencededuce the formula in ( suppose that we wish to compute the inverse and log-determinant of the matrix in uuwhere is an matrix with show that (in uu)- in qn > where qn contains the first rows of the ( hx matrix in the qr factorization of the ( hx matrixu qr ih in additionshow that ln |in uuhi= ln rii where {rii are the diagonal elements of the matrix
15,119
suppose that [ uh- ]where all rn are column vectors and we have computed (in uu)- via the qr factorization method in exercise if the columns of matrix are updated to [ uh- uh ]show that the inverse (in uu)- can be updated in ( ntime (rather than computed from scratch in ( ntimededuce that the computing cost of updating the hessian approximation ( is the same as that for the limited-memory bfgs algorithm in your solution you may use the following facts from [ suppose we are given the and factors in the qr factorization of matrix rnxh if row/column is added to matrix athen the and factors need not be recomputed from scratch (in ( ntime)but can be updated efficiently in ( ntime similarlyif row/column is removed from matrix athen the and factors can be updated in ( time suppose that rnxh has its -th column replaced with wgiving the updated (aif rh denotes the unit-length vector such that ek kek and kw vk ( ve+er+: show that eu uu rr>rr> [hintyou may find exercise in useful (blet :(ih uu)- use the woodbury identity ( to show that eu )- in - rr>rr>- (in (csuppose that we have stored in computer memory use algorithm and parts eu )- in (( + ) (aand (bto write pseudo-code that updates (in +uu)- to (in + computing time equation ( gives the rank-two bfgs update of the inverse hessian ct- to ct instead of using two-rank updatewe can consider one-rank updatein which ct- is updated to ct by the general rank-one formulact ct- ut rt > find values for the scalar ut and vector rt such that ct satisfies the secant condition ct dt show that the bfgs formula ( can be written as dc duddwhere :( )-
15,120
show that the bfgs formula ( is the solution to the constrained optimization problemcbfgs argmin ( ) subject to da awhere is the kullback-leibler discrepancy defined in ( on the other handshow that the dfp formula ( is the solution to the constrained optimization problemcdfp argmin subject to da ad( consider again the logistic regression model in exercise which used iterative reweighted least squares for training the learner repeat all the computationsbut this time using the limited-memory bfgs algorithm which training algorithm converges faster to the optimal solution download the seeds_dataset txt data set from the book' github sitewhich contains independent examples the categorical output (responsehere is the type of wheat grainkamarosaand canadian (encoded as and )so that the seven continuous features (explanatory variablesare measurements of the geometrical properties of the grain (areaperimetercompactnesslengthwidthasymmetry coefficientand length of kernel groovethusx (which does not include the constant feature and the multi-logit pre-classifier in example can be written as (xsoftmax(wx )where and implement and train this pre-classifier on the first examples of the seeds data set usingfor examplealgorithm use the remaining examples in the data set to estimate the generalization risk of the learner using the crossentropy loss [hintuse the cross-entropy loss formulas from example in exercise abovewe train the multi-logit classifier using weight matrix and bias vector repeat the training of the multi-logit modelbut this time keeping as an arbitrary constant (say )and thus setting to be "referenceclass this has the effect of removing node from the output layer of the networkgiving weight matrix and bias vector of smaller dimensions than in ( consider again example where we used softmax output function sl in conl and junction with the cross-entropy lossc(thln gy+ ( thfind formulas for zl henceverify thatsl ( they+ zl where ei is the unit length vector with an entry of in the -th position derive the formula ( for diagonal hessian update in quasi-newton method for minimization in other wordsgiven current minimizer xt of ( ) diagonal matrix of approximating the hessian of and gradient vector (xt )find the solution to the constrained optimization programmin (xt xt auaa subject toa da is diagonalwhere is the kullback-leibler distance defined in ( (see exercise
15,121
consider again the python implementation of the polynomial regression in section where the stochastic gradient descent was used for training using the polynomial regression data setimplement and run the following four alternative training methods(athe steepest-descent algorithm (bthe levenberg-marquardt algorithm in conjunction with algorithm for computing the matrix of jacobi(cthe limited-memory bfgs algorithm (dthe adam algorithm which uses past gradient values to determine the next search direction for each training algorithmusing trial and errortune any algorithmic parameters so that the network training is as fast as possible comment on the relative advantages and disadvantages of each training/optimization method for examplecomment on which optimization method makes rapid initial progressbut gets trapped in suboptimal solutionand which method is slowerbut more consistent in finding good optima consider again the pytorch code in section repeat all the computationsbut this time using the momentum method for training of the network comment on which method is preferablethe momentum or the adam method
15,122
inear lgebra and unctional nalysis the purpose of this appendix is to review some important topics in linear algebra and functional analysis we assume that the reader has some familiarity with matrix and vector operationsincluding matrix multiplication and the computation of determinants vector spacesbasesand matrices linear algebra is the study of vector spaces and linear mappings vectors areby definitionelements of some vector space and satisfy the usual rules of addition and scalar multiplicatione vector space if and vthen ax by for all ab (or cwe will be dealing mostly with vectors in the euclidean vector space rn for some that iswe view the points of rn as objects that can be added up and multiplied with scalare ( ( ( for points in sometimes it is convenient to work with the complex vector space cn instead of rn see also section vectors vk are called linearly independent if none of them can be expressed as linear combination of the othersthat isif an vn then it must hold that ai for all linearly independent definition basis of vector space set of vectors { vn is called basis of the vector space if every vector can be written as unique linear combination of the vectors in bbasis an vn the (possibly infinitenumber is called the dimension of dimension
15,123
using basis of vwe can thus represent each vector as row or column of numbers [ an or ( an standard basis transpose typicallyvectors in rn are represented via the standard basisconsisting of unit vectors (pointse ( )en ( as consequenceany point ( xn rn can be representedusing the standard basisas row or column vector of the form ( abovewith ai xi we will also write [ xn ]for the corresponding column vectorwhere denotes the transpose to avoid confusionwe will use the convention from now on that generic vector is always represented via the standard basis as column vector the corresponding row vector is denoted by xexample (linear transformationtake the matrix - - it transforms the two basis vectors [ ]and [ ]shown in red and blue in the left panel of figure to the vectors [ - ]and [ - ]shown on the right panel similarlythe points on the unit circle are transformed to an ellipse rank matrix can be viewed as an array of rows and columns that defines linear transformation from rn to rm (or for complex matricesfrom cn to cm the matrix is said to be square if if an are the columns of athat isa [ an ]and if [ xn ]then ax xn an in particularthe standard basis vector ek is mapped to the vector ak we sometimes use the notation [ai ]to denote matrix whose (ij)-th element is ai when we wish to emphasize that matrix is real-valued with rows and columnswe write rmxn the rank of matrix is the number of linearly independent rows orequivalentlythe number of linearly independent columns matrix linear transformation - - - - - - - - figure linear transformation of the unit circle
15,124
suppose [ an ]where the {ai form basis of rn take any vector [ xn ]> with respect to the standard basis (we write subscript to stress thisthen the representation of this vector with respect to is simply - xwhere - is the inverse of athat isthe matrix such that aa- - in where in is the -dimensional identity matrix to see thisnote that - ai gives the -th unit vector representationfor nand recall that each vector in rn is unique linear combination of these basis vectors example (basis representationconsider the matrix - - awith inverse / - / inverse ( the vector [ ]> in the standard basis has representation - [- ]> in the basis consisting of the columns of namely ay the transpose of matrix [ai is the matrix [ ji ]that isthe (ij)-th element of ais the ji)-th element of the trace of square matrix is the sum of its diagonal elements useful result is the following cyclic property transpose trace theorem cyclic property the trace is invariant under cyclic permutationstr(abctr(bcatr(cabproofit suffices to show that tr(deis equal to tr(edfor any matrix [di and matrix [ei the diagonal elements of de are nj= di ji and the diagonal elements of ed are mi= ji di they sum up to the same number pm pn = = di ji square matrix has an inverse if and only if its columns (or rowsare linearly independent this is the same as the matrix being of full rankthat isits rank is equal to the number of columns an equivalent statement is that its determinant is not zero the determinant of an matrix [aij is defined as det( : (- (pn api , determinant ( = where the sum is over all permutations ( pn of ( )and (pis the number of pairs (ijfor which for examplez( for the pairs ( )( )( the determinant of diagonal matrix - matrix with only zero elements off the diagonal -is simply the product of its diagonal elements diagonal matrix
15,125
geometricallythe determinant of square matrix [ an is the (signedvolume of the parallelepiped ( -dimensional parallelogramdefined by the columns an that isthe set of points ni= ai ai where ai the easiest way to compute determinant of general matrix is to apply simple operations to the matrix that potentially reduce its complexity (as in the number of non-zero elementsfor example)while retaining its determinantadding multiple of one column (or rowto anotherdoes not change the determinant multiplying column (or rowwith number multiplies the determinant by the same number swapping two rows changes the sign of the determinant by applying these rules repeatedly one can reduce any matrix to diagonal matrix it follows then that the determinant of the original matrix is equal to the product of the diagonal elements of the resulting diagonal matrix multiplied by known constant example (determinant and volumefigure illustrates how the determinant of matrix can be viewed as signed volumewhich can be computed by repeatedly applying the first rule above herewe wish to compute the area of red parallelogram determined by the matrix given in ( in particularthe corner points of the parallelogram correspond to the vectors [ ][ ][ ]and [ ] - figure the volume of the red parallelogram can be obtained by number of shear operations that do not change the volume adding - times the first column of to the second column gives the matrix -
15,126
corresponding to the blue parallelogram the linear operation that transforms the red to the blue parallelogram can be thought of as succession of two linear transformations the first is to transform the coordinates of points on the red parallelogram (in standard basisto the basis formed by the columns of secondrelative to this new basiswe apply the matrix above note that the input of this matrix is with respect to the new basiswhereas the output is with respect to the standard basis the matrix for the combined operation is now # - - - ba - / - / - which maps [ ]to [ ](does not changeand [ ]to [ - ]we say that we apply shear in the direction [ ]the significance of such an operation is that shear does not alter the volume of the parallelogram the second (blueparallelogram has an easier formbecause one of the sides is parallel to the -axis by applying another shearin the direction [ - ]we can obtain simple (greenrectanglewhose volume is in matrix termswe add / times the second column of to the first column of bto obtain the matrix - which is diagonal matrixwhose determinant is - corresponding to the volume of all the parallelograms theorem summarizes number of useful matrix rules for the concepts that we have discussed so far we leave the proofswhich typically involves "writing outthe equationsas an exercise for the readersee also [ theorem useful matrix rules (ab)ba (ab)- - - ( - )( )- = - det(abdet(adet( xax tr axxq det(ai aii if [ai is triangular nextconsider an matrix for which the matrix inverse fails to exist that isa is either non-square ( por its determinant is instead of the inversewe can use its so-called pseudo-inversewhich always exists shear
15,127
definition moore-penrose pseudo-inverse moore-penrose pseudo-inverse the moore-penrose pseudo-inverse of real matrix rnxp is defined as the unique matrix ar pxn that satisfies the conditions aaa aaaa (aa)aa (aa)aa we can write aexplicitly in terms of when has full column or row rank for examplewe always have aaaa(aa)((aa) )( )aleft pseudo-inverse ( if has full column rank pthen (aa)- existsso that from ( it follows that (aa)- athis is referred to as the left pseudo-inverseas aa similarlyif has full row rank nthat is(aa)- existsthen it follows from aaa(aa) ( (aa))aright pseudo-inverse that aa(aa)- this is the right pseudo-inverseas aain finallyif is of full rank and squarethen aa- inner product inner product the (euclideaninner product of two real vectors [ xn ]and [ yn ]is defined as the number hxyi xi yi xy = here is the matrix multiplication of the ( nmatrix xand the ( matrix the inner product induces geometry on the linear space rn allowing for the definition of lengthangleand so on the inner product satisfies the following properties hax byzi hxzi hyzi hxyi hyxi hxxi hxxi if and only if orthogonal euclidean norm vectors and are called perpendicular (or orthogonalif hxyi the euclidean norm (or lengthof vector is defined as || | xn hxxi
15,128
if and are perpendicularthen pythagorastheorem holds|| || hx yx yi hxxi hxyi hyyi || || || || pythagorastheorem ( basis { vn of rn in which all the vectors are pairwise perpendicular and have norm is called an orthonormal (short for orthogonal and normalizedbasis for examplethe standard basis is orthonormal orthonormal theorem orthonormal basis representation if { vn is an orthonormal basis of rn then any vector rn can be expressed as hxv hxvn vn ( proofobserve thatbecause the {vi form basisthere exist unique an such that an vn by the linearity of the inner product and the orthonormality of the {vi it follows that hxv ai vi an matrix whose columns form an orthonormal basis is called an orthogonal matrix note that for an orthogonal matrix [ vn ]we have orthogonal matrix > > vn vv [ vn vn vn vn vn vn hencev- vnote also that an orthogonal transformation is length preservingthat isvx has the same length as this follows from length preserving ||vx|| hvxvxi xvvx xx || || complex vectors and matrices instead of the vector space rn of -dimensional real vectorsit is sometimes useful to consider the vector space cn of -dimensional complex vectors in this case the adjoint or conjugate transpose operation (*replaces the transpose operation (>this involves the usual transposition of the matrix or vector with the additional step that any complex number is replaced by its complex conjugate for exampleif xand aa then [ and the qualifier "orthogonalfor such matrices has been fixed by history better term would have been "orthonormaladjoint
15,129
the (euclideaninner product of and (viewed as column vectorsis now defined as hxyi hermitian unitary xi yi = which is no longer symmetrichxyi hyxi note that this generalizes the real-valued inner product the determinant of complex matrix is defined exactly as in ( as consequencedet(adet(aa complex matrix is said to be hermitian or self-adjoint if aaand unitary if aa (that isif aa- for real matrices "hermitianis the same as "symmetric"and "unitaryis the same as "orthogonala orthogonal projections let { uk be set of linearly independent vectors in rn the set span { uk { ak uk ak }linear subspace orthogonal complement orthogonal projection matrix is called the linear subspace spanned by { uk the orthogonal complement of vdenoted by is the set of all vectors that are orthogonal to vin the sense that hwvi for all the matrix such that px xfor all vand px for all is called the orthogonal projection matrix onto suppose that [ uk has full rankin which case uu is an invertible matrix the orthogonal projection matrix onto span { uk is then given by (uu)- unamelysince pu uthe matrix projects any vector in onto itself moreoverp projects any vector in onto the zero vector using the pseudo-inverseit is possible to specify the projection matrix also for the case where is not of full rankleading to the following theorem theorem orthogonal projection let [ uk thenthe orthogonal projection matrix onto span{ uk is given by ( where uis the (rightpseudo-inverse of proofby property of definition we have pu uuu uso that projects any vector in onto itself moreoverp projects any vector in onto the zero vector note that in the special case where uk above form an orthonormal basis of vthen the projection onto is very simple to describenamely we have px uu = hxui ui (
15,130
for any point rn the point in that is closest to is its orthogonal projection pxas the following theorem shows theorem orthogonal projection and minimal distance let { uk be an orthonormal basis of subspace and let be the orthogonal projection matrix onto the solution to the minimization program min kx yk yv is px that ispx is closest to proofwe can write each point as kx yk = ai ui = pk = ai ui consequently ai ui kxk = ai hxui = minimizing this with respect to the {ai gives ai hxui ii in view of ( )the optimal is thus px eigenvalues and eigenvectors let be an matrix if av lv for some number and non-zero vector vthen is called an eigenvalue of with eigenvector if (lvis an (eigenvalueeigenvectorpairthe matrix li maps any multiple of to the zero vector consequentlythe columns of li are linearly dependentand hence its determinant is this provides way to identify the eigenvaluesnamely as the different roots lr of the characteristic polynomial eigenvalue eigenvector characteristic polynomial det(li ( ) ( lr )ar where ar the integer ai is called the algebraic multiplicity of li the eigenvectors that correspond to an eigenvalue li lie in the kernel or null space of the matrix li athat isthe linear space of vectors such that (li ) this space is called the eigenspace of li its dimensiondi { }is called the geometric multiplicity of li it always holds that di ai if di nthen we can construct basis for rn consisting of eigenvectorsas illustrated next example (linear transformation (cont )we revisit the linear transformation in figure where - / - the characteristic polynomial is ( )( / with roots - / / - and - / / the corresponding unit eigenvectors are [ - ]and [ - ]the eigenspace corresponding to is algebraic multiplicity null space geometric multiplicity
15,131
span { { rand the eigenspace corresponding to is span { the algebraic and geometric multiplicities are in this case any pair of vectors taken from and forms basis for figure shows how and are transformed to av and av respectively - - - - figure the dashed arrows are the unit eigenvectors (blueand (redof matrix their transformed values av and av are indicated by solid arrows semi-simple diagonalizable matrix for which the algebraic and geometric multiplicities of all its eigenvalues are the same is called semi-simple this is equivalent to the matrix being diagonalizablemeaning that there is matrix and diagonal matrix such that vdv- eigendecomposition to see that this so-called eigen-decomposition holdssuppose is semi-simple matrix with eigenvalues } lr { } | { dr let be the diagonal matrix whose diagonal elements are the eigenvalues of aand let be matrix whose columns are linearly independent eigenvectors corresponding to these eigenvalues thenfor each (eigenvalueeigenvectorpair (lv)we have av lv hencein matrix notationwe have av vdand so vdv- leftand right-eigenvectors the eigenvector as defined in the previous section is called right-eigenvectoras it lies on the right of in the equation av lv if is complex matrix with an eigenvalue lthen the eigenvalue' complex conjugate is an eigenvalue of ato see thisdefine :li and :li asince is an eigenvaluewe have det( applying the identity det(bdet( )we see that
15,132
therefore det( and hence that is an eigenvalue of alet be an eigenvector corresponding to thenaw lw orequivalentlywa lwfor this reasonwe call wthe left-eigenvector of for eigenvalue if is (right-eigenvector of athen its adjoint vis usually not left-eigenvectorunless aa aa(such matrices are called normala real symmetric matrix is normalhoweverthe important property holds that leftand right-eigenvectors belonging to different eigenvalues are orthogonal namelyif wis left-eigenvalue of and right-eigenvalue of then wv wav wvwhich can only be true if wv theorem schur triangulation for any complex matrix athere exists unitary matrix such that - au is upper triangular proofthe proof is by induction on the dimension of the matrix clearlythe statement is true for as is simply complex number and we can take equal to suppose that the result is true for dimension we wish to show that it also holds for dimension any matrix always has at least one eigenvalue with eigenvector vnormalized to have length let be any unitary matrix whose first column is such matrix can always be constructed as is unitarythe first row of - is vand - au is of the form { for some matrix by the induction hypothesisthere exists unitary matrix and an upper triangular matrix such that - bw nowdefine : then## - au - - bw - which is upper triangular of dimension since uv is unitarythis completes the inductionand hence the result is true for all the theorem above can be used to prove an important property of hermitian matricesi matrices for which aa after specifying we can complete the rest of the unitary matrix via the gram-schmidt procedurefor examplesee section lefteigenvector normal matrix
15,133
eigenvalues and eigenvectors theorem eigenvalues of hermitian matrix any hermitian matrix has real eigenvalues the corresponding matrix of normalized eigenvectors is unitary matrix prooflet be hermitian matrix by theorem there exists unitary matrix such that - au twhere is upper triangular it follows that the adjoint ( - au)tis lower triangular however( - au) - ausince aa and uu- hencet and tmust be the samewhich can only be the case if is real diagonal matrix since au duthe diagonal elements are exactly the eigenvalues and the corresponding eigenvectors are the columns of in particularthe eigenvalues of real symmetric matrix are real we can now repeat the proof of theorem with real eigenvalues and eigenvectorsso that there exists an orthogonal matrix such that - aq qaq the eigenvectors can be chosen as the columns of qwhich form an orthonormal basis this proves the following theorem theorem real symmetric matrices are orthogonally diagonizable any real symmetric matrix can be written as qdqwhere is the diagonal matrix of (realeigenvalues and is an orthogonal matrix whose columns are eigenvectors of example (real symmetric matrices and ellipsesas we have seenlinear transformations map circles into ellipses we can use the above theory for real symmetric matrices to identify the principal axes considerfor examplethe transformation with matrix [ - / - in ( point on the unit circle is mapped to point ax since for such points kxk xx we have that satisfies ( - ) - which gives the equation for the ellipse let be the orthogonal matrix of eigenvectors of the symmetric matrix ( - ) - (aa)- so (aa)- for some diagonal matrix taking the inverse on both sides of the previous equationwe have qaaq - which shows that is also the matrix of eigenvectors of aathese eigenvectors point precisely in the direction of the principal axesas shown in figure it turns outsee section that the square roots of the eigenvalues of aahere approximately and correspond to the sizes of the principal axes of the ellipseas illustrated in figure
15,134
- - - figure the eigenvectors and eigenvalues of aadetermine the principal axes of the ellipse the following definition generalizes the notion of positivity of real variable to that of (hermitianmatrixproviding crucial concept for multivariate differentiation and optimizationsee appendix definition positive (semi)definite matrix hermitian matrix is called positive semidefinite (we write if haxxi for all it is called positive definite (we write if haxxi for all the positive (semi)definiteness of matrix can be directly related to the positivity of its eigenvaluesas followstheorem eigenvalues of positive semidefinite matrix all eigenvalues of positive semidefinite matrix are non-negative and all eigenvalues of positive definite matrix are strictly positive prooflet be positive semidefinite matrix by theorem the eigenvalues of are all real suppose is an eigenvalue with eigenvector as is positive semidefinitewe have havvi hvvi kvk which can only be true if similarlyfor positive definite matrixl must be strictly greater than corollary any real positive semidefinite matrix can be written as bbfor some real matrix converselyfor any real matrix bthe matrix bbis positive semidefinite positive semidefinite
15,135
proofthe matrix is both hermitian (by definitionand real (by assumptionand hence it is symmetric by theorem we can write qdqwhere is the diagonal matrix of (realeigenvalues of by theorem all eigenvalues are non-negativeand thus their square root is real-valued nowdefine dwhere is defined as the diagonal matrix whose elements are the square roots of the eigenvalues of diagonal thenbb dq qdqa the converse statement follows from the fact that xbbx kbxk for all matrix decompositions matrix decompositions are frequently used in linear algebra to simplify proofsavoid numerical instabilityand to speed up computations we mention three important matrix decompositions( )luqrand svd ( )lu decomposition every invertible matrix can be written as the product of three matricesa plupermutation matrix plu decomposition ( where is lower triangular matrixu an upper triangular matrixand permutation matrix permutation matrix is square matrix with single in each row and columnand zeros otherwise the matrix product pb simply permutes the rows of matrix andlikewisebp permutes its columns decomposition of the form ( is called plu decomposition as permutation matrix is orthogonalits transpose is equal to its inverseand so we can write ( as pa lu the decomposition is not uniqueand in many cases can be taken to be the identity matrixin which case we speak of the lu decomposition of aalso called the lr for left-right (triangulardecomposition plu decomposition of an invertible matrix can be obtained recursively as follows the first step is to swap the rows of such that the element in the first column and first row of the pivoted matrix is as large as possible in absolute value write the resulting matrix as where is the permutation matrix that swaps the first and -th rowwhere is the row that contains the largest element in the first column nextadd the matrix - [ > / to the last rows of to obtain the matrix > > = > / in effectwe add some multiple of the first row to each of the remaining rows in order to obtain zeros in the first columnexcept for the first element we now apply the same procedure to as we did to and then to subsequent smaller matrices an-
swap the first row with the row having the maximal absolute value element in the first column make every other element in the first column equal to by adding appropriate multiples of the first row to the other rows suppose that at has plu decomposition pt lt ut then it is easy to check that ## at > pt- ( pt > ct /at lt ut { { { pt- lt- ut- is plu decomposition of at- since the plu decomposition for the scalar an- is trivialby working backwards we obtain plu decomposition of example (plu decompositiontake - our goal is to modify via steps and above so as to obtain an upper triangular matrix with maximal elements on the diagonal we first swap the first and second row nextwe add - / times the first row to the third row and / times the second row to the third row - - - - - - - / / the final matrix is and in the process we have applied the permutation matrices using the recursion ( we can now recover and namelyat the final iteration we have and / and subsequently - / / - / observing that [ ] - and / plu decompositions can be used to solve large systems of linear equations of the form ax efficientlyespecially when such an equation has to be solved for many different this is done by first decomposing into pluand then solving two triangular systems ly pb ux
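A minimal sketch of this two-stage solve, using scipy's LU routines (scipy.linalg.lu_factor and lu_solve, which return a pivoted LU factorization); the matrix and right-hand side below are arbitrary illustrative choices, and the worked examples that follow carry out the same kind of computation by hand.

import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[ 2.0, 4.0, 2.0],
              [ 1.0, 1.0, 2.0],
              [-1.0, 0.0, 3.0]])   # illustrative invertible matrix
b = np.array([1.0, 2.0, 3.0])      # illustrative right-hand side

lu, piv = lu_factor(A)             # one factorization ...
x = lu_solve((lu, piv), b)         # ... can be reused for many right-hand sides b
print(x, np.allclose(A @ x, b))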
forward substitution backward substitution the first equation can be solved efficiently via forward substitutionand the second via backward substitutionas illustrated in the following example example (solving linear equations with an lu decompositionlet plu be the same as in example we wish to solve ax [ ]firstsolving / - / givesy and / / / by forward substitution next - / / gives / - / / and ( / )/ - / so [- ]/ woodbury identity lu (or more generally pludecompositions can also be applied to block matrices starting point is the following lu decomposition for general matrix# / bc/ which holds as long as this can be seen by simply writing out the matrix product the block matrix generalization for matrices rnxn rnxk rkxn rkxk is #in - onxk ( : ca- okxn ik provided that is invertible (againwrite out the block matrix productherewe use the notation pxq to denote the matrix of zeros we can further rewrite this as##in onxk onxk in - sca- ik okxn ca- okxn ik thusinverting both sideswe obtain #- #- #- in - onxk in onxk - okxn ik okxn ca- ca- ik inversion of the above block matrices gives (again write out##in onxk in - - - onxk - okxn ik okxn ( ca- )- -ca- ik ( assuming that is invertiblewe could also perform block ul (as opposed to ludecomposition# bd- in onxk ( okxn - ik
whichafter similar calculation as the one aboveyields ##in onxk ( bd- )- onxk in -bd- - ik - - ik okxn - okxn ( the upper-left block of - from ( must be the same as the upper-left block of - from ( )leading to the woodbury identity( bd- )- - - ( ca- )- ca- ( det( bd- cdet(ddet(adet( ca- ( woodbury identity from ( and the fact that the determinant of product is the product of the determinantswe see that det(sdet(adet( ca- bsimilarlyfrom ( we have det(sdet( bd- cdet( )leading to the identity the following special cases of ( and ( are of particular importance theorem sherman-morrison formula suppose that rnxn is invertible and xy rn thendet( xydet( )( ya- xif in addition ya- - then the sherman-morrison formula holds( xy)- - shermanmorrison formula - xya- ya- prooftake xc -yand in ( and ( one important application of the sherman-morrison formula is in the efficient solution of the linear system ax bwhere is an matrix of the forma > = for some column vectors rn and diagonal (or otherwise easily invertiblematrix such linear systems arisefor examplein the context of ridge regression and optimization to see how the sherman-morrison formula can be exploiteddefine the matrices via the recursionak ak- ak > application of theorem for yields the identities: - - ak- - - - ak ak ak- > - - ak |ak |ak- > - - ak here |ais shorthand notation for det(
thereforeby evolving the recursive relationships up until pwe obtain- - | | - - - - = > - - > - - = these expressions will allow us to easily compute - - and | | provided the following quantities are availableckj : - - sinceby theorem we can write- - - ak- - - - ak- ak- ak- > - - - ak- jthe quantities {ckj can be computed from the recursionc - jj > - ck- ckj ck- ck- , - > - ck- , - pj kp ( observe that this recursive computation takes ( ntime and that once {ckj are availablewe can express - and |aas- - | | jj >jj = > jj > jj = in summarywe have proved the following theorem sherman-morrison recursion the inverse and determinant of the matrix respectively bypp = ak ak are given - - - cd det(adet( det( )where rnxp and pxp are the matrices : , , :diag > , > , and all the { , are computed from the recursion ( in ( ntime
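Before turning this recursion into an algorithm, here is a quick numerical sanity check (with a random diagonal A and random vectors, chosen only for illustration) of the rank-one Sherman-Morrison update that drives it.

import numpy as np

rng = np.random.default_rng(0)
n = 5
A = np.diag(rng.uniform(1.0, 2.0, n))      # easily invertible (diagonal) matrix
x = rng.standard_normal(n)
y = rng.standard_normal(n)

Ainv = np.diag(1.0 / np.diag(A))
# Sherman-Morrison: (A + x y^T)^{-1} = A^{-1} - A^{-1} x y^T A^{-1} / (1 + y^T A^{-1} x)
update = Ainv - np.outer(Ainv @ x, y @ Ainv) / (1.0 + y @ Ainv @ x)
direct = np.linalg.inv(A + np.outer(x, y))
print(np.allclose(update, direct))          # True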
as consequence of theorem the solution to the linear system ax can be computed in ( ntime via- - cd [ bif pthe sherman-morrison recursion can frequently be much faster than the ( direct solution via the lu decomposition method in section in summarythe following algorithm computes the matrices and in theorem via the recursion ( algorithm sherman-morrison recursion inputeasily invertible matrix and column vectors - outputmatrices and such that cd- ca- - ck ak for (assuming is diagonal or easily invertible matrix for do dk > ck for do > ck cj cj dk > [ diag( return and finallynote that if is diagonal matrix and we only store the diagonal elements of and (as opposed to storing the full matrices and )then the storage or memory requirements of algorithm are only ( na cholesky decomposition if is real-valued positive definite matrix (and therefore symmetric) covariance matrixthen an lu decomposition can be achieved with matrices and ltheorem cholesky decomposition real-valued positive definite matrix [ai rnxn can be decomposed as llwhere the real lower triangular matrix [lk satisfies the recursive formula - ak = ji lki lk - = ji for and where = ji lki : (
proofthe proof is by inductive construction for nlet ak be the left-upper submatrix of an with :[ ]we have > ae by the positive-definiteness of it follows that suppose that ak- has cholesky factorization lk- > - with lk- having strictly positive diagonal elementswe can construct cholesky factorization of ak as follows first write lk- > - ak- ak > - akk and propose lk to be of the form lk- lk lk- lkk for some vector lk- rk- and scalar lkk for which it must hold that #lk- > - ak- lk- lk- lk- > - akk lk- lkk lkk to establish that such an lk- and lkk existwe must verify that the set of equations lk- lk- ak- lk- lk- lkk akk ( has solution the system lk- lk- ak- has unique solutionbecause (by assumptionlk- is lower diagonal with strictly positive entries down the main diagonal and we can solve for lk- using forward substitutionlk- - - ak- we can solve the second equation as lkk akk klk provided that the term within the square root is positive we demonstrate this using the fact that is positive definite matrix in particularfor rn of the form [ > ]where is non-zero ( )-dimensional vector and non-zero numberwe have # lk- > - ak- ax kl> - > ak- akk > - akk now take - - - lk- to obtain ax (akk klk- therefore( can be uniquely solved as we have already solved it for we can solve it for any nleading to the recursive formula ( and algorithm below an implementation of cholesky' decomposition that uses the notation in the proof of theorem is the following algorithmwhose running cost is ( algorithm cholesky decomposition inputpositive-definite matrix an with entries {ai outputlower triangular ln such that ln > an for do ak- [ ak- , ] lk- - (computed in ( time via forward substitutionp - - lkk akk lk- lk- lk- lk lk- lkk return ln
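The recursive construction in the proof can be transcribed almost verbatim into Python and checked against numpy's built-in routine; the sketch below assumes the input matrix is real positive definite and makes no attempt at efficiency.

import numpy as np
from scipy.linalg import solve_triangular

def cholesky(A):
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    L[0, 0] = np.sqrt(A[0, 0])
    for k in range(1, n):
        # off-diagonal part of the new row: solve L_{k-1} l = a_{k-1} by forward substitution
        l = solve_triangular(L[:k, :k], A[:k, k], lower=True)
        L[k, :k] = l
        L[k, k] = np.sqrt(A[k, k] - l @ l)   # positive because A is positive definite
    return L

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = B @ B.T + 4 * np.eye(4)                  # a positive definite test matrix
L = cholesky(A)
print(np.allclose(L @ L.T, A), np.allclose(L, np.linalg.cholesky(A)))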
qr decomposition and the gram-schmidt procedure let be an matrixwhere thenthere exists matrix rnxp satisfying qq and an upper triangular matrix pxp such that qr this is the qr decomposition for real-valued matrices when has full column ranksuch decomposition can be obtained via the gram-schmidt procedurewhich constructs an orthonormal basis { of the column space of spanned by { }in the following way (see also figure ) take /ka let be the projection of onto span { that isp hu now take ( )/ka this vector is perpendicular to and has unit length let be the projection of onto span { that isp hu +hu now take ( )/ka this vector is perpendicular to both and and has unit length continue this process to obtain - figure illustration of the gram-schmidt procedure at the end of the procedurea set { of orthonormal vectors are obtained consequentlyas result of theorem aj ha ui ui { = ri pgram-schmidt
for some numbers ri ii denoting the corresponding upper triangular matrix [ri by rwe have in matrix notationr qr [ [ ar pp which yields qr decomposition the qr decomposition can be used to efficiently solve least-squares problemsthis will be shown shortly it can also be used to calculate the determinant of the matrix awhenever is square namelydet(adet(qdet(rdet( )and since is triangularits determinant is the product of its diagonal elements there exist various improvements of the gram-schmidt process (for examplethe householder transformation [ ]that not only improve the numerical stability of the qr decompositionbut also can be applied even when is not full rank an important application of the qr decomposition is found in solving the least-squares problem in ( ntimeminp kxb yk br for some rnxp (modelmatrix using the defining properties of the pseudo-inverse in definition one can show that kxxy yk kxb yk for any in other wordsb : minimizes kxb yk if we have the qr decomposition qrthen numerically stable way to calculate with an ( ncost is via (qr) rqy rqy if has full column rankthen rr- note that while the qr decomposition is the method of choice for solving the ordinary least-squares regression problemthe sherman-morrison recursion is the method of choice for solving the regularized least-squares (or ridgeregression problem singular value decomposition singular value decomposition one of the most useful matrix decompositions is the singular value decomposition (svdtheorem singular value decomposition any (complexmatrix matrix admits unique decomposition usvwhere and are unitary matrices of dimension and nrespectivelyand is real diagonal matrix if is realthen and are both orthogonal matrices proofwithout loss of generality we can assume that (otherwise consider the transpose of athen aa is positive semidefinite hermitian matrixbecause haavvi vaav kavk for all henceaa has non-negative real eigenvaluesl
ln by theorem the matrix [ vn of right-eigenvectors is unitary matrix define the -th singular value as si li and suppose lr are all greater than and lr+ ln in particularavi for let ui avi /si thenfor ij rhui * ui * aavi si singular value li { { jsi we can extend ur to an orthonormal basis { um of cm ( using the gramschmidt procedurelet [ un be the corresponding unitary matrix defining to be the diagonal matrix with diagonal ( sr )we haveus [av avr avand hence usvnote that aausvvsuussuand aa vsuusvvssvsou is unitary matrix whose columns are eigenvectors of aaand is unitary matrix whose columns are eigenvectors of aa the svd makes it possible to write the matrix as sum of rank- matricesweighted by the singular values {si } um sr si ui * ( = vn which is called the dyade or spectral representation of for real-valued matricesthe svd has nice geometric interpretationillustrated in figure the linear mapping defined by matrix can be thought of as succession of three linear operations( an orthogonal transformation ( rotation with possible flipping of some axes)corresponding to matrix vfollowed by ( simple scaling of the unit vectorscorresponding to sfollowed by ( another orthogonal transformationcorresponding to - - - - - - - - - - - - figure the figure shows how the unit circle and unit vectors (first panelare first rotated (second panel)then scaled (third panel)and finally rotated and flipped spectral representation
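The dyadic (spectral) representation can be verified numerically: summing the rank-one terms s_i u_i v_i^T recovers A exactly, and truncating the sum gives a low-rank approximation whose spectral-norm error is the first discarded singular value. A small sketch with a random matrix:

import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3))
U, s, VT = np.linalg.svd(A, full_matrices=False)

A_rebuilt = sum(s[i] * np.outer(U[:, i], VT[i, :]) for i in range(len(s)))
print(np.allclose(A, A_rebuilt))              # True: the dyads reproduce A

A_rank1 = s[0] * np.outer(U[:, 0], VT[0, :])  # keep only the largest singular value
print(np.linalg.norm(A - A_rank1, 2), s[1])   # spectral-norm error equals the next singular value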
example (ellipseswe continue example using the svd method of the module numpy linalgwe obtain the following svd matrices for matrix - - usand - - figure shows the columns of the matrix us as the two principal axes of the ellipse that is obtained by applying matrix to the points of the unit circle practical method to compute the pseudo-inverse of real-valued matrix is via the singular value decomposition usvwhere is the diagonal matrix collecting all the positive singular valuessay sr as in theorem in this caseavsuwhere sis the diagonal (pseudo-inversematrix- - we conclude with typical application of the pseudo-inverse for least-squares optimization problem from data science example (rank-deficient least squaresgiven is an data matrix xn xn xnp it is assumed that the matrix is of full row rank (all rows of are linearly independentand that the number of rows is less than the number of columnsn under this settingany solution to the equation xb provides perfect fit to the data and minimizes (to the least-squares problem argmin kxb yk ( rp in particularif bminimizes kxb yk then so does bu for all in the null space nx :{ xu }which has dimension to cope with the non-uniqueness of solutionsa possible approach is to solve instead the following optimization problemminimize bb br subject to xb that iswe are interested in solution with the smallest squared norm (orequivalentlythe smallest normthe solution can be obtained via lagrange' method (see section specificallyset (blbb (xb )and solve (bl xl (
and the gradient with respect to λ gives Xb - y = 0. From the first equation we get b = -X^T λ/2; substituting this into the second, we arrive at λ = -2(XX^T)^{-1} y, and hence b* is given by

b* = -X^T λ/2 = X^T (XX^T)^{-1} y = X^+ y.

An example Python code is given below; the random seed and the dimensions n < p are illustrative choices.

svdexample.py

from numpy import diag, zeros, vstack
from numpy.random import rand, seed
from numpy.linalg import svd, pinv

seed(123)                 # illustrative seed
n, p = 4, 6               # illustrative dimensions with n < p
X = rand(n, p)
y = rand(n, 1)

U, s, VT = svd(X)         # s holds the n positive singular values of X
Si = diag(1/s)
# compute pseudo-inverse via the SVD
pseudo_inv = VT.T @ vstack((Si, zeros((p - n, n)))) @ U.T
# pseudo_inv = pinv(X)    # remove comment for the built-in pseudo-inverse
b = pseudo_inv @ y
print(b)                  # a p x 1 solution vector (values depend on the seed)

Solving Structured Matrix Equations

For a general matrix A in C^{n x n}, performing matrix-vector multiplications takes O(n^2) operations, and solving linear systems Ax = b and carrying out LU decompositions takes O(n^3) operations. However, when A is sparse (that is, has relatively few non-zero elements) or has a special structure, the computational complexity of these operations can often be reduced. Matrices that are "structured" in this way often satisfy a Sylvester equation of the form

A - M_1 A M_2 = G_1 G_2*,

where M_1, M_2 in C^{n x n} are sparse matrices and G_1, G_2 in C^{n x r} are matrices of rank r. The elements of A must be easy to recover from these matrices, e.g., with O(n) operations. A typical example is a (square) Toeplitz matrix, which has the following structure:
- -( - -( - - -( - an- - an- an- general square toeplitz matrix is completely determined by the elements along its first row and column if is also hermitian ( aa)then clearly it is determined by only elements if we define the matrices ** ** and - ** ** then ( is satisfied with -( - - - - - -( - : -( - an- - an- - an- - -( - -( - -( - an- - which has rank convolution example (discrete convolution of vectorsthe convolution of two vectors can be represented as multiplication of one of the vectors by toeplitz matrix suppose [ an ]and [ bn ]are two complex-valued vectors thentheir convolution is defined as the vector with -th element [ ] ak bi- + nk= where : for it is easy to verify that the convolution can be written as ab
wheredenoting the -dimensional column vector of zeros by we have that - - - - clearlythe matrix is (sparsetoeplitz matrix circulant matrix is special toeplitz matrix which is obtained from vector by circularly permuting its indices as followsc cn- cn- ( cn- cn- cn- cn- note that is completely determined by the elements of its first columnc to illustrate how structured matrices allow for faster matrix computationsconsider solving the linear systeman xn an for xn [ xn ]where an [ an ]and an- an- an- an : - an- an- ( is real-valued symmetric positive-definite toeplitz matrix (so that it is invertiblenote that the entries of an are completely determined by the right-hand side of the linear equationvector an as we shall see shortly in example the solution to the more general linear equation an xn bn where bn is arbitrarycan be efficiently computed using the solution to this specific system an xn an obtained via special recursive algorithm (algorithm belowfor every the toeplitz matrix ak satisfies ak pk ak pk where pk is permutation matrix that "flipsthe order of elements -rows when premultiplying and columns when post-multiplying for example where circulant matrix
clearlypk > and pk pk ik holdso that in fact pk is an orthogonal matrix we can solve the linear system an xn an in ( time recursivelyas follows assume that we have somehow solved for the upper block ak xk ak and now we wish to solve for the ( ( block#ak pk ak ak ak+ xk+ ak+ = > pk ak+ thereforea ak+ > pk ak ak pk ak - since - pk pk ak the second equation above simplifies to - - ak ak pk ak xk pk xk substituting xk pk xk into ak+ > pk and solving for yieldsaak+ > pk xk > xk finallywith the value of computed abovewe have xk pk xk xk+ levinsondurbin this gives the following levinson-durbin recursive algorithm for solving an xn an algorithm levinson-durbin recursion for solving an xn an inputfirst row [ an- [ > - of matrix an outputsolution xn - an for do bk > xk [xk, xk, - xk, ] (ak+ > )/bk xk xk+ return xn in the algorithm abovewe have identified xk [xk, xk, xk, ]the advantage of the levinson-durbin algorithm is that its running cost is ( )instead of the usual ( using the {xk bk computed in algorithm we construct the following lower triangular matrix recursivelysetting and lk lk+ ( (pk xk )
thenwe have the following factorization of an theorem diagonalization of toeplitz correlation matrix an for real-valued symmetric positive-definite toeplitz matrix an of the form ( )we have ln an > dn where ln is the lower diagonal matrix ( and dn :diag( bn- is diagonal matrix proofwe give proof by induction obviouslyl > is true nextassume that the factorization lk ak > dk holds for given observe that #lk ak pk ak lk ak lk pk ak lk+ ak+ (pk xk ) > pk (pk xk )ak > pk (pk xk )pk ak it is straightforward to verify that [(pk xk )ak > pk (pk xk )pk ak [ > bk ]yielding the recursion lk ak lk pk ak lk+ ak+ > bk secondlyobserve that #lk ak > -lk ak pk xk lk pk ak lk ak lk pk ak > -pk xk lk+ ak+ lk+ > bk > > bk by noting that ak pk xk pk pk ak pk xk pk ak xk pk ak we obtainlk ak > lk+ ak+ lk+ > bk hencethe result follows by induction example (solving an xn bn in ( timeone application of the factorization in theorem is in the fast solution of linear system an xn bn where the righthand side is an arbitrary vector bn since the solution xn can be written as - xn - bn ln dn ln bn we can compute xn in ( timeas follows algorithm solving an xn bn for general right-hand side inputfirst row [ > - of matrix an and right-hand side bn outputsolution xn - bn compute ln in ( and the numbers bn- via algorithm [ xn ln bn (computed in ( time xi xi /bi- for (computed in (ntime [ xn [ xn ln (computed in ( time return xn [ xn
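For comparison, scipy provides a Levinson-recursion based Toeplitz solver, scipy.linalg.solve_toeplitz; the sketch below (with an illustrative positive-definite Toeplitz matrix built from the sequence 2^{-k}) checks it against a dense O(n^3) solve.

import numpy as np
from scipy.linalg import toeplitz, solve_toeplitz

n = 6
a = 0.5 ** np.arange(n)          # first column of a symmetric positive-definite Toeplitz matrix
b = np.ones(n)                   # an arbitrary right-hand side

x_fast = solve_toeplitz(a, b)    # Levinson-type solve in O(n^2) time
x_dense = np.linalg.solve(toeplitz(a), b)
print(np.allclose(x_fast, x_dense))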
note that it is possible to avoid the explicit construction of the lower triangular matrix in ( via the following modification of algorithm which only stores an extra vector at each recursive step of the levinson-durbin algorithm algorithm solving an xn bn with (nmemory cost inputfirst row [ > - of matrix an and right-hand side bn outputsolution xn - bn for do [xk xk- [yk yk- > (bk+ > )/ ay (ak+ > )/ [ xa ] [ ay yay ] return function space functional analysis much of the previous theory on euclidean vector spaces can be generalized to vector spaces of functions every element of (real-valuedfunction space is function from some set to rand elements can be added and scalar multiplied as if they were vectors in other wordsif and hthen bg for all ab on we can impose an inner product as mapping ** from to that satisfies ha gi ah gi bh gi fgi hgf ff ff if and only if (the zero functionnorm complete we focus on real-valued function spacesalthough the theory for complex-valued function spaces is similar (and sometimes easier)under suitable modifications ( fgi hgf isimilar to the linear algebra setting in section we say that two elements and in are orthogonal to each other with respect to this inner product if fgi given an inner productwe can measure distances between elements of the function space using the norm : ff for examplethe distance between two functions fm and fn is given by fm fn the space is said to be complete if every sequence of functions for which fm fn as mn (
converges to some hthat isk fn as sequence that satisfies ( is called cauchy sequence complete inner product space is called hilbert space the most fundamental hilbert space of functions is the space an in-depth introduction to requires some measure theory [ for our purposesit suffices to assume that rd and that on measure is defined which assigns to each suitable set positive number ( ( its volumein many cases of interest is of the form (aw(xdx ( cauchy sequence hilbert space measure where is positive function on which is called the density of with respect to the lebesgue measure (the natural volume measure on rd we write (dxw(xdx to indicate that has density another important case is where (aw( )( density azd where is again called the density of ubut now with respect to the counting measure on zd (which counts the points of zd integrals with respect to measures in ( and ( can now be defined as (xu(dxf (xw(xdxand (xu(dxx (xw( ) respectively we assume for simplicity that has the form ( for measures of the form ( (so-called discrete measures)replace integrals by sums in what follows definition space let be subset of rd with measure (dxw(xdx the hilbert space (xuis the linear space of functions from to that satisfy ( ) (xdx ( and with inner product fgi if (xg(xw(xdx ( let be hilbert space set of functions fi iis called an orthonormal system not all sets have measure suitable sets are borel setswhich can be thought of as countable unions of rectangles orthonormal system
the norm of every fi is that ish fi fi for all the fi are orthogonalthat ish fi for orthonormal basis it follows then that the fi are linearly independentthat isthe only linear combination (xthat is equal to fi (xfor all is the one where ai and for an orthonormal system fi is called an orthonormal basis if there is no other that is orthogonal to all the fi (other than the zero functionalthough the general theory allows for uncountable basesin practice the set is taken to be countable example (trigonometric orthonormal basislet be the hilbert space (( ) )where (dxw(xdx and is the constant function ( alternativelytake and the indicator function on ( pthe trigonometric functions ( gk (xcos(kx) hk (xsin(kx) form countable infinite-dimensional orthonormal basis of hilbert space with an orthonormal basis behaves very similarly to the familiar euclidean vector space in particularevery element ( functionf can be written as unique linear combination of the basis vectorsx ffi fi ( fourier expansion exactly as in theorem the right-hand side of ( is called (generalizedfourier expansion of note that such fourier expansion does not require trigonometric basisany orthonormal basis will do example (example (cont )consider the indicator function ( { pas the trigonometric functions {gk and {hk form basis for (( ) dx)we can write (xa ak cos(kxbk sin(kx)( = = rp rp rp where dx / ak cos(kx) dx and bk sin(kx) dxk this means that ak for all kbk for even kand bk /( pfor odd consequently sin(kxf ( ( = figure shows several fourier approximations obtained by truncating the infinite sum in ( the function spaces typically encountered in machine learning and data science are usually separable spaceswhich allows for the set to be considered countableseee [
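The truncated expansion derived in the example is easy to evaluate numerically; the following sketch sums the series f(x) ~ 1/2 + (2/pi) * sum over odd k of sin(kx)/k for a few truncation levels (the particular numbers of terms are arbitrary) and shows the values approaching 1 on (0, pi) and 0 on (pi, 2 pi).

import numpy as np

def fourier_step(x, n_terms):
    ks = np.arange(1, 2 * n_terms, 2)          # odd frequencies 1, 3, 5, ...
    return 0.5 + (2 / np.pi) * np.sum(np.sin(np.outer(x, ks)) / ks, axis=1)

x = np.linspace(0.01, 2 * np.pi - 0.01, 5)
for m in (4, 16, 64):
    print(m, np.round(fourier_step(x, m), 3))  # tends to 1 on (0, pi) and 0 on (pi, 2 pi)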
figure fourier approximations of the unit step function on the interval ( )truncating the infinite sum in ( to and termsgiving the dotted bluedashed redand solid green curvesrespectively starting from any countable basiswe can use the gram-schmidt procedure to obtain an orthonormal basisas illustrated in the following example example (legendre polynomialstake the function space (rw(xdx)where ( {- we wish to construct an orthonormal basis of polynomial functions starting from the collection of monomialsi where ik using gram-schmidtthe first normalized zero-degree polynomial is /ki / to find ( polynomial of degree )project (the identity functiononto the space spanned by the resulting projection is :hg ig written out as (xz (xdx ( - - dx henceg ( )/ki is linear functionthat isof the form (xax the constant is found by normalization kg - (xdx dx - so / the gram-schmidt procedurewe find (xthat (xcontinuing / ( ) ( / ( xandin general dk gk ( ( ) kdx these are the (normalizedlegendre polynomials the graphs of and are given in figure legendre polynomials
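Numerically, the normalized Legendre polynomials are g_k(x) = sqrt((2k+1)/2) P_k(x), where P_k is the standard Legendre polynomial; the sketch below builds them from numpy's Legendre module and verifies orthonormality on (-1, 1) by Gauss-Legendre quadrature.

import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

K = 5
nodes, weights = leggauss(20)                  # exact for polynomials of degree < 40

def g(k, x):
    return np.sqrt((2 * k + 1) / 2) * Legendre.basis(k)(x)   # normalized Legendre polynomial

G = np.array([[np.sum(weights * g(i, nodes) * g(j, nodes)) for j in range(K)]
              for i in range(K)])
print(np.allclose(G, np.eye(K)))               # True: <g_i, g_j> = delta_ij on (-1, 1)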
- - - figure the first normalized legendre polynomials as the legendre polynomials form an orthonormal basis of ( {- dx)they can be used to approximate arbitrary functions in this space for examplefigure shows an approximation using the first legendre polynomials ( of the fourier expansion of the indicator function on the interval (- / / these legendre polynomials form the basis of -dimensional linear subspace onto which the indicator function is orthogonally projected - - figure approximation of the indicator function on the interval (- / / )using the legendre polynomials orthogonal polynomials laguerre polynomials the legendre polynomials were produced in the following waywe started with an unnormalized probability density on -in this case the probability density of the uniform distribution on (- we then constructed sequence of polynomials by applying the gram-schmidt procedure to the monomials xx by using exactly the same procedurebut with different probability densitywe can produce other such orthogonal polynomials for examplethe density of the standard exponential distributionw(xe- gives the laguerre polynomialswhich are defined this can be further generalized to the density of gamma distribution
by the recurrence ( )gn+ ( ( )gn (xngn- ( ) with ( and ( xfor the hermite polynomials are obtained when using instead the density of the standard normal distributionw(xe- / px these polynomials satisfy the recursion gn+ (xxgn (xdgn (xdx hermite polynomials with ( note that the hermite polynomials as defined above have not been normalized to have norm to normalizeuse the fact that kgn nwe conclude with number of key results in functional analysis the first one is the celebrated cauchy-schwarz inequality cauchyschwarz theorem cauchy-schwarz let be hilbert space for every fg it holds that | fgi kgk proofthe inequality is trivially true for (zero functionfor we can write ag hwhere and fgi/kgk consequentlyk | | kgk khk | | kgk the result follows after rearranging this last inequality let and be two linear vector spaces (for examplehilbert spaceson which norms kv and kw are defined suppose is mapping from to when vsuch mapping is often called an operatorwhen it is called functional mapping is said to be linear if ( bgaaf ba(gin this case we write instead of af if there exists such that ka kw kv ( then is said to be bounded mapping the smallest for which ( holds is called the norm of adenoted by kak (not necessarily linearmapping is said to be continuous at if for any sequence converging to the sequence af )af )converges to af that isif vk gkv kaf ( )kw operator functional linear mapping ( if the above property holds for every vthen the mapping itself is called continuous theorem continuity and boundedness for linear mappings for linear mappingcontinuity and boundedness are equivalent prooflet be linear and bounded we may assume that is non-zero (otherwise the statement holds trivially)and that therefore kak taking /kak in ( now ensures that ka agkw kak gkv kak this shows that is continuous bounded mapping norm continuous mapping
converselysuppose is continuous in particularit is continuous at (the zeroelement of vthustake and let and be as in ( for any let /( kgkv as khkv / dit follows from ( that kahkw kagkw kgkv rearranging the last inequality gives kagkw /dkgkv showing that is bounded theorem riesz representation theorem any bounded linear functional ph on hilbert space can be represented as ph(hhhgifor some (depending on phprooflet be the projection of onto the nullspace of phthat isn { ph( if ph is not the -functionalthen there exists with ph( let pg then and ph( ph( take /ph( for any hf : ph( ) lies in as it holds that fg which is equivalent to hhg ph(hkg by defining /kg we have found our representation fourier transforms we will now briefly introduce the fourier transform before doing sowe will extend the concept of space of real-valued functions as follows definition space let be subset of rd with measure (dxw(xdx and [ then (xuis the linear space of functions from to that satisfy ( ) (xdx ( when (xuis in fact hilbert space equipped with inner product fgi (xg(xw(xdx ( we are now in position to define the fourier transform (with respect to the lebesgue measurenote that in the following definitions and we have chosen particular convention equivalent (but not identicaldefinitions exist that include scaling constants ( ) or ( )- and where - pt is replaced with pttor -
definition (multivariatefourier transform the fourier transform of (realor complex-valuedfunction (rd is the function defined as ( : - (xdx rd fourier transform rd the fourier transform is continuousuniformly bounded (since (rd imr plies that ( ) rd ( )dx )and satisfies limktke ( ( result known as the riemann-lebesgue lemmahowevere does not necessarily have finite integral simple example in is the fourier transform of ( {- / / then (tsin(pt)/(ptsinc(pt)which is not absolutely integrable definition (multivariateinverse fourier transform the inverse fourier transform - of (realor complex-valuedfunction ( is the function fv defined as fv( :ei (tdtx rd rd as one would hopeit holds that if and are both in (rd )then - [ ]almost everywhere the fourier transform enjoys many interesting and useful propertiessome of which we list below linearityfor fg (rd and constants ab rf [ bga [ space shifting and scalinglet rdxd be an invertible matrix and rd constant vector let (rd and define ( : (ax bthen - ) [ ](tei ( ( - )/det( )|where -:( )- ( - ) frequency shifting and scalinglet rdxd be an invertible matrix and rd constant vector let (rd and define - ( : - ( - )/det( )then [ ](te (at differentiationlet (rd (rd and let fk : /xk be the partial derivative of with respect to xk if fk (rd for dthen fk ]( ( tk (tinverse fourier transform
convolutionlet fg (rd be real or complex valued functions their convolutionf gis defined as )(xf (yg( ydyrd and is also in (rd moreoverthe fourier transform satisfies gf [ dualitylet and both be in (rd then [ ]](tf (- product formulalet fg (rd and denote by fe their respective fourier transforms then gf (rd )and (zg(zdz ( ) (zdz rd rd there are many additional properties which hold if (rd (rd in particularif fg (rd (rd )then , (rd and fe gi fgia result often known as parseval' formula putting gives the result often referred to as plancherel' theorem the fourier transform can be extended in several waysin the first instance to functions in (rd by continuity substantial extension of the theory is realized by replacing integration with respect to the lebesgue measure ( rd dxwith integration with respect to (finite borelmeasure ( rd (dx)moreoverthere is close connection between the fourier transform and characteristic functions arising in probability theory indeedif is random vector with pdf then its characteristic function ps satisfies ps( : ei ](- /( ) discrete fourier transform herewe introduce the (univariatediscrete fourier transformwhich can be viewed as special case of the fourier transform introduced in definition where integration is with respect to the counting measureand ( for ( definition discrete fourier transform discrete fourier transform the discrete fourier transform (dftof vector [ xn- ]cn is the vector [ xn- ]whose elements are given by xt - = st where exp(- /nin other wordse is obtained from via the linear transformation fx(
where - ( - on- ( - ( - ) the matrix is so-called vandermonde matrixand is clearly symmetric ( fmoreoverfn is in fact unitary matrix and hence its inverse is simply its complex conjugate fn thusf- / and we have that the inverse discrete fourier transform (idftis given by - -st ( xt = or in terms of matrices and vectorsx fe / observe that the idft of vector is related to the dft of its complex conjugate ysince / / consequentlyan idft can be computed via dft there is close connection between circulant matrices and the dft to make this connection concretelet be the circulant matrix corresponding to the vector cn and denote by the -th column of the discrete fourier matrix ft thenthe -th element of is - ( -kmod otk - cy ot( -yy= = ots |{zn- cy -ty -th element of = { ls hencethe eigenvalues of are lt cf with corresponding eigenvectors collecting the eigenvalues into the vector [ ln- ]fcwe therefore have the eigen-decomposition diag(lf/ consequentlyone can compute the circular convolution of vector [ an ]and [ cn- ]by series of dfts as follows construct the circulant matrix corresponding to thenthe circular convolution of and is given by ca proceed in four steps compute fa/ compute fc inverse discrete fourier transform
compute [ zn ln- ] compute steps and are (up to constantsin the form of an idftand step is in the form of dft these are computable via the fft (section in ( ln ntime step is dot product computable in (ntime thusthe circular convolution can be computed with the aid of the fft in ( ln ntime one can also efficiently compute the product of an toeplitz matrix and an vector in ( ln ntime by embedding into circulant matrix of size nx namelydefine cb where tn- tn- -( - -( - - - - - -( - then product of the form ta can be computed in ( ln ntimesince we may write # ta ba the left-hand side is product of circulant matrix with vector of length nand so can be computed in ( ln ntime via the fftas previously discussed conceptuallyone can also solve equations of the form cx for given vector cn and circulant matrix (corresponding to cn assuming all its eigenvalues are non-zerovia the following four steps compute fb/ compute fc compute / [ / zn /ln- ] compute fp once againsteps and are (up to constantsin the form of an idftand step is in the form of dftall of which are computable via the fft in ( ln ntimeand step is computable in (ntimemeaning the solution can be computed using the fft in ( ln ntime fast fourier transform fast fourier transform the fast fourier transform (fftis numerical algorithm for the fast evaluation of ( and ( by using divide-and-conquer strategythe algorithm reduces the computational complexity from ( (for the naive evaluation of the linear transformationto ( ln [
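To make the preceding recipes concrete, the following sketch computes a circular convolution with numpy's FFT routines (which implement the DFT and IDFT up to the constant factors discussed above) and checks the result against a direct multiplication by the circulant matrix; the vectors are random and purely illustrative.

import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(3)
n = 8
c = rng.standard_normal(n)     # vector defining the circulant matrix C
a = rng.standard_normal(n)

fast = np.fft.ifft(np.fft.fft(c) * np.fft.fft(a)).real   # DFT-based circular convolution, O(n ln n)
direct = circulant(c) @ a                                 # O(n^2) reference computation
print(np.allclose(fast, direct))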
the essence of the algorithm lies in the following observation suppose then one can express any index appearing in ( via pair ( )with where { and { similarlyone can express any index appearing in ( via pair ( )with where { and { identifying xt xt , and , we may re-express ( as xt , - - = , = ( observe that (because or )so that the inner sum over depends only on and define yt , : - , = computing each yt , requires ( operations in terms of the {yt , }( can be written as xt , - yt , = requiring ( operations to compute thuscalculating the dft using this two-step procedure requires ( ( )operationsrather than ( now supposing rm repeated application the above divide-and-conquer idea yields an -step procedure requiring ( ( rm )operations in particularif rk for all mwe have that rm and logr nso that the total number of operations is ( mo( logr ( )typicallythe radix is small (not necessarily primenumberfor instance further reading good reference book on matrix computations is golub and van loan [ useful list of many common vector and matrix calculus identities can be found in [ strang' introduction to linear algebra [ is classic textbookand his recent book [ combines linear algebra with the foundations of deep learning fast reliable algorithms for matrices with structure can be found in [ kolmogorov and fomin' masterpiece on the theory of functions and functional analysis [ still provides one of the best introductions to the topic popular choice for an advanced course in functional analysis is rudin [
ultivariate ifferentiation and ptimization the purpose of this appendix is to review various aspects of multivariate differentiation and optimization we assume the reader is familiar with differentiating realvalued function multivariate differentiation for multivariate function that maps vector [ xn ]to real number ( ) is the derivative taken with respect the partial derivative with respect to xi denoted to xi while all other variables are held constant we can write all the partial derivatives neatly using the "scalar/vectorderivative notationf :scalar/vector( partial derivative xn this vector of partial derivatives is known as the gradient of at and is sometimes written as (xnextsuppose that is multivalued (vector-valuedfunction taking values in rm defined by (xx (xx = (xxn fm (xwe can compute each of the partial derivatives fi / and organize them neatly in "vector/vectorderivative notationf xfm xfm ( vector/vector: xfmn xn xn gradient
matrix of jacobi the transpose of this matrix is known as the matrix of jacobi of at (sometimes called the frechet derivative of at )that isf # ( :( fm fm fm xn if we define ( : (xand take the "vector/vectorderivative of with respect to xwe obtain the matrix of second-order partial derivatives of xm xm ( :( xm xm xm hessian matrix which is known as the hessian matrix of at xalso denoted as (xif these secondf xj andhenceorder partial derivatives are continuous in region around xthen xi the hessian matrix (xis symmetric finallynote that we can also define "scalar/matrixderivative of with respect to rmxn with (ij)-th entry xi xy xy : xymn xm xm and "matrix/scalarderivativex : xm xm ******** xmn example (scalar/matrix derivativelet axbwhere rmxn rm and rn since is scalarwe can write tr(ytr(xba)using the cyclic property of the trace (see theorem defining :bawe have ym [xc]ii = = = so that /xi ji orin matrix formy cabx xi ji
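A finite-difference check of this identity (with random a, b and X of illustrative sizes) is a useful habit when deriving matrix derivatives:

import numpy as np

rng = np.random.default_rng(4)
m, n = 3, 4
a, b, X = rng.standard_normal(m), rng.standard_normal(n), rng.standard_normal((m, n))

def y(X):
    return a @ X @ b              # the scalar y = a^T X b from the example above

eps = 1e-6
num_grad = np.zeros((m, n))
for i in range(m):
    for j in range(n):
        E = np.zeros((m, n)); E[i, j] = eps
        num_grad[i, j] = (y(X + E) - y(X - E)) / (2 * eps)   # central difference

print(np.allclose(num_grad, np.outer(a, b), atol=1e-6))       # dy/dX = a b^T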
example (scalar/matrix derivative via the woodbury identitylet tr - where xa rnxn we now prove that - -ax- to show thisapply the woodbury matrix identity to an infinitesimal perturbationx euof xand take to obtain the following( )- - - - ( - )- - -- - - thereforeas tr ( )- tr - --tr - - -tr - ax- nowsuppose that is an all zero matrix with one in the (ij)-th position we can write- - ( ua tr tr lim -tr ux- ax- - ax- ji | xi  - ax- thereforex the following two examples specify multivariate derivatives for the important special cases of linear and quadratic functions example (gradient of linear functionlet (xax for some mxn constant matrix thenits vector/vector derivative ( is the matrix ax ( to see thislet ai denote the (ij)-th element of aso that pn = xk (xax pn = amk xk to find the ji)-th element of xf we differentiate the -th element of with respect to fi aik xk ai = in other wordsthe (ij)-th element of xf is ji the (ij)-th element of aexample (gradient and hessian of quadratic functionlet (xxax for some constant matrix thenf ( ( ) (
it follows immediately that if is symmetricthat isa athen (xax ax and (xax to prove ( )first observe that (xxax ni= nj= ai xi which is quadratic form in xis real-valuedwith xx aik xi ak ai xi xk xk = = = = the first term on the right-hand side is equal to the -th element of axwhereas the second term equals the -th element of xaor equivalently the -th element of ax taylor expansion the matrix of jacobi and the hessian matrix feature prominently in multidimensional taylor expansions theorem multidimensional taylor expansions let be an open subset of rn and let if is continuously twice differentiable function with jacobian matrix (xand hessian matrix ( )then for every we have the following firstand second-order taylor expansionsf (xf (aj ( ( ao(kx ak ( and (xf (aj ( ( ( ) ( ( ao(kx ak ( as kx ak by dropping the remainder termsone obtains the corresponding taylor approximations the result is essentially saying that smooth enough function behaves locally (in the neighborhood of point xlike linear and quadratic function thusthe gradient or hessian of an approximating linear or quadratic function is basic building block of many approximation and optimization algorithms remark (version without remainder termsan alternative version of taylor' theorem states that there exists an that lies on the line segment between and such that ( and ( hold without remainder termswith (ain ( replaced by ( and (ain ( replaced by ( composition chain rule consider the functions rk rm and rm rn the function gf ( )is called the composition of and written as and is function from rk to rn suppose (xand ( )as in figure let (xand (ybe the (frechetderivatives of (at xand (at )respectively we may think of (xas the matrix that describes
howin neighborhood of xthe function can be approximated by linear functionf ( hf (xj ( )hand similarly for (ythe well-known chain rule of calculus simply states that the derivative of the composition is the matrix product of the derivatives of and that ischain rule gf (xj (yj (xgf rn figure function composition the blue arrows symbolize the linear mappings in terms of our vector/vector derivative notationwe have ### ormore simplyz ( in similar way we can establish scalar/matrix chain rule in particularsuppose is an matrixwhich is mapped to :xa for fixed -dimensional vector in turny is mapped to scalar : (yfor some function denote the columns of by thenp xa jj= andthereforey/ in it follows by the chain rule ( that in ai xi xi thereforei xzp ax ( example (derivative of the log-determinantsuppose we are given positive definite matrix pxp and wish to compute the scalar/matrix derivative lna|athe result is ln |aa- to see thiswe can reason as follows by theorem we can write qwhere
is an orthogonal matrix and diag( is the diagonal matrix of eigenvalues of the eigenvalues are strictly positivesince is positive definite denoting the columns of by (qi )we have li > aqi tr qi aq> ( from the properties of determinantswe have :ln |aln | qln(| | | |pp ln |di= ln li we can thus write ln |ax ln li li ln li li li li = = = where the second equation follows from the chain rule applied to the function composition li from ( and example we have li / qi > it follows that qq - qa- = li objective function optimization theory optimization is concerned with finding minimal or maximal solutions of real-valued objective function in some set xmin (xor xx local minimizer global minimizer max (xxx ( since any maximization problem can easily be converted into minimization problem via the equivalence max (xmin ( )we focus only on minimization problems we use the following terminology local minimizer of (xis an element xx such that ( (xfor all in some neighborhood of xif ( (xfor all xthen xis called global minimizer or global solution the set of global minimizers is denoted by argmin (xxx local/global minimum the function value (xcorresponding to local/global minimizer xis referred to as the local/global minimum of (xoptimization problems may be classified by the set and the objective function if is countablethe optimization problem is called discrete or combinatorial if instead is nondenumerable set such as rn and takes values in nondenumerable setthen the problem is said to be continuous optimization problems that are neither discrete nor continuous are said to be mixed the search set is often defined by means of constraints standard setting for constrained optimization (minimizationis the followingmin (xxy subject tohi ( gi ( mi (
heref is the objective functionand {gi and {hi are given functions so that hi ( and gi ( represent the equality and inequality constraintsrespectively the region where the objective function is defined and where all the constraints are satisfied is called the feasible region an optimization problem without constraints is said to be an unconstrained problem for an unconstrained continuous optimization problemthe search space is often taken to be ( subset ofrn and is assumed to be function for sufficiently high (typically or suffices)that isits -th order derivative is continuous for function the standard approach to minimizing (xis to solve the equation ( ( where (xis the gradient of at the solutions xto ( are called stationary points stationary points can be local/global minimizerslocal/global maximizersor saddle points (which are neitherifin additionthe function is the condition ( ( ) feasible region for all stationary points saddle points ( ensures that the stationary point xis local minimizer of the condition ( states that the hessian matrix of at xis positive definite recall that we write to indicate that matrix is positive definite in figure we have multiextremal objective function on there are four stationary pointstwo are local minimizersone is local maximizerand one is neither minimizer nor maximizerbut saddle point local maximum (xsaddle point local minimum global minimum figure multiextremal objective function in one dimension convexity and optimization an important class of optimization problems is related to the notion of convexity set is said to be convex if for all it holds that ( ax for all in additionthe objective function is convex function provided that for each in the interior of there exists vector such that (yf ( ( )vy ( convex function
subgradient the vector in ( may not be unique and is referred to as subgradient of one of the crucial properties of convex function is that jensen' inequality holds (see exercise in ) (xf (ex)for any random vector directional derivative example (convexity and directional derivativethe directional derivative of multivariate function at in the direction is defined as the right derivative of ( : ( dat ( df (xlim ( /tf ( )tt| lim this right derivative may not always exist howeverif is convex functionthen the directional derivative of at in the interior of its domain always exists (in any direction dto see thislet by jensen' inequality we have for any and in the interior of the domaint ( (xf making the substitution and rearranging the last equation yieldsf ( df (xf ( df (xt in other wordsthe function ( df ( ))/ is increasing for and therefore the directional derivative satisfiesf ( df (xf ( df (xinf lim > | henceto show existence it is enough to show that ( df ( ))/ is bounded from below since lies in the interior of the domain of we can choose small enough so that also lies in the interior thereforethe convexity of implies that there exists subgradient vector such that ( df (xv( din other wordsf ( df (xvd provides lower bound for all and the directional derivative of at an interior always exists (in any directionconcave function function satisfying ( with strict inequality is said to be strictly convex it is said to be (strictlyconcave function if is (strictlyconvex assuming that is an open setconvexity for is equivalent to (yf ( ( ) ( for all xy moreoverfor strict convexity is equivalent to the hessian matrix being positive definite for all xand convexity isequivalent to the hessian matrix being positive semidefinite for all xthat isy (xy for all and recall that we write to indicate that matrix is positive semidefinite
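The Hessian characterization suggests a simple numerical convexity check: estimate the Hessian at a few points and inspect its smallest eigenvalue. The sketch below does this for the log-sum-exp function f(x) = ln(sum_i exp(x_i)), a standard example of a convex function; the finite-difference step and test points are arbitrary choices.

import numpy as np

def f(x):
    return np.log(np.sum(np.exp(x)))

def hessian(f, x, eps=1e-4):
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i] * eps, np.eye(n)[j] * eps
            # central second difference for the (i, j) entry of the Hessian
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * eps**2)
    return H

rng = np.random.default_rng(5)
for _ in range(3):
    x = rng.standard_normal(4)
    print(np.linalg.eigvalsh(hessian(f, x)).min() >= -1e-6)   # smallest eigenvalue is (numerically) nonnegative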
example (convexity and differentiabilityif is continuously differentiable multivariate functionthen is convex if and only if the univariate function ( : ( ) [ is convex function for any and in the interior of the domain of this property provides an alternative definition for convexity of multivariate and differentiable function to see why it is truefirst assume that is convex and [ thenusing the subgradient definition of convexity in ( )we have (af ( ( ) for some subgradient substituting with and dwe obtain ( ( ( vd for any two points [ thereforeg is convexbecause we have identified the existence of subgradient vd for each converselyassume that is convex for [ since is differentiablethen so is thenthe convexity of implies that there is subgradient at such thatg(tg( for all [ rearrangingg(tg( vt and taking the right limit as we obtain ( df (xthereforeg(tg( ( df (xand substituting yieldsf ( df (xdf ( )so that there exists subgradient vectornamely ( )for each hencef is convex by the definition in ( an optimization program of the form ( is said to be convex programming problem if the objective is convex function the inequality constraint functions {gi are convex the equality constraint functions {hi are affinethat isof the form > bi this is equivalent to both hi and -hi being convex for all table summarizes some commonly encountered problemsall of which are convexwith the exception of the quadratic programs with convex programming problem
table some common classes of optimization problems name (xconstraints linear program (lpcx ax and inequality form lp cx ax quadratic program (qp ax bx dx dex convex qp ax bx dx dex convex program (xconvex {gi ( )convex{hi ( )of the form > bi ( recognizing convex optimization problems or those that can be transformed to convex optimization problems can be challenging howeveronce formulated as convex optimization problemsthese can be efficiently solved using subgradient [ ]bundle [ ]and cutting-plane methods [ lagrange function lagrangian method the main components of the lagrangian method are the lagrange multipliers and the lagrange function the method was developed by lagrange in for the optimization problem ( with only equality constraints in kuhn and tucker extended lagrange' method to inequality constraints given an optimization problem ( containing only equality constraints hi ( mthe lagrange function is defined as (xbf (xlagrange multipliers bi hi ( ) = where the coefficients {bi are called the lagrange multipliers necessary condition for point xto be local minimizer of (xsubject to the equality constraints hi ( mis (xb (xb lagrangian for some value bthe above conditions are also sufficient if (xbis convex function of given the original optimization problem ( )containing both the equality and inequality constraintsthe generalized lagrange functionor lagrangianis defined as (xabf (xk = ai gi (xm = bi hi (
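As a small concrete illustration of the Lagrangian (here with equality constraints only, so the alpha-terms play no role), consider minimizing the convex quadratic (1/2) x^T Q x - c^T x subject to Ax = b; setting the gradients of L with respect to x and beta to zero yields a linear system, solved below for illustrative Q, c, A and b.

import numpy as np

Q = np.array([[4.0, 1.0], [1.0, 3.0]])     # illustrative positive definite Q
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])                 # single equality constraint x1 + x2 = 1
b = np.array([1.0])

# Stationarity of L(x, beta) = (1/2) x^T Q x - c^T x + beta^T (A x - b) gives
# Q x + A^T beta = c together with A x = b:
KKT = np.block([[Q, A.T], [A, np.zeros((1, 1))]])
sol = np.linalg.solve(KKT, np.concatenate([c, b]))
x_star, beta_star = sol[:2], sol[2:]
print(x_star, beta_star, A @ x_star)       # x* satisfies the constraint exactly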
theorem karush-kuhn-tucker (kktconditions necessary condition for point xto be local minimizer of (xin the optimization problem ( is the existence of an aand bsuch that (xab (xab gi ( * * gi ( ki ki for convex programs we have the following important results [ ] every local solution xto convex programming problem is global solution and the set of global solutions is convex ifin additionthe objective function is strictly convexthen any global solution is unique for strictly convex programming problem with objective and constraint functionsthe kkt conditions are necessary and sufficient for unique global solution duality the aim of duality is to provide an alternative formulation of an optimization problem which is often more computationally efficient or has some theoretical significance (see [ page ]the original problem ( is referred to as the primal problem whereas the reformulated problembased on lagrange multipliersis called the dual problem duality theory is most relevant to convex optimization problems it is well known that if the primal optimization problem is (strictlyconvex then the dual problem is (strictlyconcave and has (uniquesolution from which the (uniqueoptimal primal solution can be deduced the lagrange dual program (also called the wolfe dualof the primal program ( )ismax (aba, subject toa where lis the lagrange dual functionl(abinf (xab)xx ( giving the greatest lower bound (infimumof (xabover all possible it is not difficult to see that if is the minimal value of the primal problemthen (ab for any and any this property is called weak duality the lagrangian dual program thus determines the best lower bound on if dis the optimal value for the dual problem then the difference dis called the duality gap the duality gap is extremely useful for providing lower bounds for the solutions of primal problems that may be impossible to solve directly it is important to note that for primal dual lagrange dual program
strong duality linearly constrained problemsif the primal is infeasible (does not have solution satisfying the constraints)then the dual is either infeasible or unbounded converselyif the dual is infeasible then the primal has no solution of crucial importance is the strong duality theoremwhich states that for convex programs ( with linear constrained functions hi and gi the duality gap is zeroand any xand (absatisfying the kkt conditions are (globalsolutions to the primal and dual programsrespectively in particularthis holds for linear and convex quadratic programs (note that not all quadratic programs are convexfor convex primal program with objective and constraint functionsthe lagrangian dual function ( can be obtained by simply setting the gradient (with respect to xof the lagrangian (xabto zero one can further simplify the dual program by substituting into the lagrangian the relations between the variables thus obtained furtherfor convex primal problemif there is strictly feasible point (that isa feasible point satisfying all of the inequality constraints with strict inequality)then the duality gap is zeroand strong duality holds this is known as slater' condition [ page the lagrange dual problem is an important example of saddle-point problem or minimax problem in such problems the aim is to locate point (xyx that satisfies sup inf (xyinf (xyf (xysup (xyinf sup (xyyy xx the equation xx yy sup inf (xyinf sup (xyyy xx minimax xx yy xx yy is known as the minimax equality other problems that fall into this framework are zerosum games in game theorysee also [ for number of combinatorial optimization problems that can be viewed as minimax problems numerical root-finding and minimization in order to minimize function rn one may solve ( -norm which gives stationary point of as consequenceany technique for root-finding can be transformed into an unconstrained optimization method by attempting to locate roots of the gradient howeveras noted in section not all stationary points are minimaand so additional information (such as is contained in the hessianif is needs to be considered in order to establish the type of stationary point alternativelya root of continuous function rn rn may be found by minimizing the norm of (xover all xthat isby solving min ( )with ( :kg( ) where for the -norm of [ yn ]is defined as kyk :  = |yi  / henceany (un)constrained optimization method can be transformed into technique for locating the roots of function
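A minimal sketch of this reduction, using scipy.optimize.minimize (with its default quasi-Newton method) to locate a root of a simple nonlinear map g by minimizing ||g(x)||^2; the map and starting point are illustrative.

import numpy as np
from scipy.optimize import minimize

def g(x):
    return np.array([x[0] ** 2 + x[1] ** 2 - 1.0,   # unit circle
                     x[0] - x[1]])                  # diagonal line

res = minimize(lambda x: np.sum(g(x) ** 2), x0=np.array([2.0, 0.5]))
print(res.x, g(res.x))    # from this start, close to the root (1/sqrt(2), 1/sqrt(2))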
starting with an initial guess most minimization and root-finding algorithms create sequence using the iterative updating rulext+ xt at dt ( where at is (typically smallstep sizecalled the learning rateand the vector dt is the search direction at step the iteration ( continues until the sequence {xt is deemed to have converged to solutionor computational budget has been exhausted the performance of all such iterative methods depends crucially on the quality of the initial guess there are two broad categories of iterative optimization algorithms of the form ( )those of line search typewhere at iteration we first compute direction dt and then determine reasonable step size at along this direction for examplein the case of minimizationat may be chosen to approximately minimize (xt dt for fixed xt and dt those of trust region typewhere at each iteration we first determine suitable step size at and then compute an approximately optimal direction dt learning rate line search trust region in the following sectionswe review several widely-used root-finding and optimization algorithms of the line search type newton-like methods suppose we wish to find roots of function rn rn if is in we can approximate around point xt as (xf (xt (xt )( xt )where is the matrix of jacobi -the matrix of partial derivatives of see ( when (xt is invertiblethis linear approximation has root xt - (xt (xt this gives the iterative updating formula ( for finding roots of with direction dt - - (xt (xt and learning rate at this is known as newton' method (or the newton-raphson methodfor root-finding instead of unit learning ratesometimes it is more effective to use an at that satisfies the armijo inexact line search conditionk (xt at dt ) ( at (xt )kwhere is small heuristically chosen constantsay - for functionssuch an at always exists by continuity and can be computed as in the following algorithm newton' method armijo inexact line search
Algorithm: Newton-Raphson for finding roots of g(x) = 0.
input: an initial guess x_0 and a stopping error ε > 0.
output: an approximate root of g(x) = 0.
1. x ← x_0
2. while ||g(x)|| > ε and the budget is not exhausted do
3.   solve the linear system J_g(x) d = −g(x)
4.   α ← 1
5.   while ||g(x + α d)|| > (1 − ε α) ||g(x)|| do α ← α/2
6.   x ← x + α d
7. return x

We can adapt the root-finding Newton-like method in order to minimize a differentiable function f: R^n → R; we simply try to locate a zero of the gradient of f. When f is a C² function, the gradient ∇f: R^n → R^n is continuous, and the Newton root-finding iteration applied to ∇f leads to the search direction

  d_t = −H_t^{-1} ∇f(x_t),

where H_t is the Hessian matrix at x_t (the Jacobian matrix of the gradient is the Hessian). When the learning rate α_t is equal to 1, the update x_{t+1} = x_t − H_t^{-1} ∇f(x_t) can alternatively be derived by assuming that f(x) is approximately quadratic and convex in the neighborhood of x_t, that is,

  f(x) ≈ f(x_t) + (x − x_t)^T ∇f(x_t) + ½ (x − x_t)^T H_t (x − x_t),

and then minimizing the right-hand side with respect to x. The following algorithm uses an Armijo inexact line search for minimization and guards against the possibility that the Hessian may not be positive definite (that is, its Cholesky decomposition does not exist).

Algorithm: Newton-Raphson for minimizing f(x).
input: an initial guess x_0, a stopping error ε > 0, and a line-search parameter.
output: an approximate minimizer of f(x).
1. x ← x_0, L ← I_n (the identity matrix)
2. while ||∇f(x)|| > ε and the budget is not exhausted do
3.   compute the gradient ∇f(x) and the Hessian H at x
4.   if H is positive definite then (Cholesky is successful)
5.     update L to be the lower-triangular Cholesky factor satisfying L L^T = H
6.   else do not update L
7.   w ← L^{-1}(−∇f(x)) (computed by forward substitution)
8.   d ← (L^T)^{-1} w (computed by backward substitution)
9.   α ← 1
10.  while f(x + α d) exceeds the Armijo sufficient-decrease bound f(x) + α ε d^T ∇f(x) do α ← α/2
11.  x ← x + α d
12. return x
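The following minimal Python sketch mirrors the minimization algorithm above (Cholesky safeguard plus Armijo-style backtracking). The gradient and Hessian are supplied by the user; the Rosenbrock test function, the sufficient-decrease constant 0.5, and the iteration caps are illustrative choices, not prescribed by the text.

import numpy as np

def newton_minimize(f, grad, hess, x, eps=1e-8, max_iter=100):
    """Newton-Raphson minimization with a Cholesky safeguard and Armijo-style backtracking."""
    L = np.eye(len(x))                       # fallback factor if the Hessian is not PD
    for _ in range(max_iter):
        u = grad(x)
        if np.linalg.norm(u) <= eps:
            break
        try:
            L = np.linalg.cholesky(hess(x))  # succeeds only if the Hessian is positive definite
        except np.linalg.LinAlgError:
            pass                             # keep the previous (or identity) factor
        w = np.linalg.solve(L, -u)           # forward substitution
        d = np.linalg.solve(L.T, w)          # backward substitution
        a, fx = 1.0, f(x)
        while a > 1e-10 and f(x + a * d) > fx + 0.5 * a * (u @ d):
            a /= 2.0                         # backtrack until sufficient decrease holds
        x = x + a * d
    return x

# Hypothetical test problem: the Rosenbrock function.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2), 200*(x[1] - x[0]**2)])
hess = lambda x: np.array([[2 - 400*(x[1] - x[0]**2) + 800*x[0]**2, -400*x[0]],
                           [-400*x[0], 200.0]])
print(newton_minimize(f, grad, hess, np.array([-1.0, 1.0])))   # approaches [1, 1]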
A downside of all Newton-like methods is that at each step they require the calculation and inversion of an n × n Hessian matrix, which has a computing time of O(n³) and is thus infeasible for large n. One way to avoid this cost is to use quasi-Newton methods, described next.

Quasi-Newton Methods

The idea behind quasi-Newton methods is to replace the inverse Hessian at iteration t by an n × n matrix C_t satisfying the secant condition

  C_t u_t = d_t,

where d_t := x_t − x_{t−1} and u_t := ∇f(x_t) − ∇f(x_{t−1}) are vectors stored in memory at each iteration. The secant condition is satisfied, for example, by the matrices in Broyden's family of rank-one corrections. Since there is an infinite number of matrices A that satisfy the secant condition, we need a way to determine a unique A at each iteration, such that computing and storing A from one step to the next is fast and avoids any costly matrix inversion. The following examples illustrate how, starting with an initial guess, such a matrix can be efficiently updated from one iteration to the next.

Example (Low-Rank Hessian Update). The quadratic model above can be strengthened by further assuming that exp(−f(x)) is proportional to a probability density that can be approximated in the neighborhood of x_t by the pdf of the N(x_{t+1}, H_t^{-1}) distribution. This normal approximation allows us to measure the discrepancy between two approximating pairs (x_{t+1}, C) and (x_{t+1}, A) using the Kullback-Leibler divergence between the pdfs of the N(x_{t+1}, C) and N(x_{t+1}, A) distributions (see the exercises):

  D(C, A) := ½ ( tr(A^{-1} C) − n + ln(det A / det C) ).

Suppose that the latest approximation to the inverse Hessian is C, and we wish to compute an updated approximation A for step t + 1. One approach is to find the symmetric matrix A that minimizes its Kullback-Leibler discrepancy from C, as defined above, subject to the secant constraint. In other words,

  min_A D(C, A) subject to: A = A^T, A u = d.

The solution to this constrained optimization (see the exercises) yields the Broyden-Fletcher-Goldfarb-Shanno or BFGS formula for updating the matrix C from one iteration to the next:

  C_BFGS = C + (1 + u^T C u / (d^T u)) (d d^T)/(d^T u) − (d u^T C + C u d^T)/(d^T u).
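A minimal NumPy sketch of this BFGS update, embedded in a simple quasi-Newton loop; the quadratic test problem and the fixed learning rate 0.5 are hypothetical choices for illustration only.

import numpy as np

def bfgs_update(C, d, u):
    """One BFGS update of the inverse-Hessian approximation C, given the step
    d = x_t - x_{t-1} and the gradient difference u = grad_t - grad_{t-1}."""
    du = d @ u
    Cu = C @ u
    return (C + (1.0 + (u @ Cu) / du) * np.outer(d, d) / du
              - (np.outer(d, Cu) + np.outer(Cu, d)) / du)

# Quasi-Newton iteration on a hypothetical quadratic f(x) = 0.5 x^T A x - b^T x.
A = np.array([[3.0, 1.0], [1.0, 2.0]]); b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b
x = np.zeros(2); C = np.eye(2); g = grad(x)
for _ in range(20):
    x_new = x - 0.5 * (C @ g)            # fixed learning rate for simplicity
    g_new = grad(x_new)
    d, u = x_new - x, g_new - g
    if d @ u > 1e-12:                    # curvature condition; skip the update otherwise
        C = bfgs_update(C, d, u)
    x, g = x_new, g_new
print(x, np.linalg.solve(A, b))          # the two should approximately agree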
In a practical implementation, we keep a single copy of the matrix C in memory and apply the BFGS update to it at every iteration. Note that if the current C is symmetric, then so is the updated matrix. Moreover, the BFGS update modifies C by a matrix of rank two.

Since the Kullback-Leibler divergence is not symmetric, it is possible to flip the roles of A and C in the minimization above and instead solve

  min_A D(A, C) subject to: A = A^T, A u = d.

The solution (see the exercises) gives the Davidon-Fletcher-Powell or DFP formula for updating the matrix C from one iteration to the next:

  C_DFP = C + (d d^T)/(d^T u) − (C u u^T C)/(u^T C u).

Note that if the curvature condition d^T u > 0 holds and the current C is symmetric positive definite, then so is its update.

Example (Diagonal Hessian Update). The original BFGS formula requires O(n²) storage and computation, which may be unmanageable for large n. One way to circumvent this prohibitive quadratic cost is to only store and update a diagonal Hessian matrix from one iteration to the next. If C is diagonal, then we may not be able to satisfy the secant condition C u = d and maintain positive definiteness. Instead, the secant condition can be relaxed to a set of componentwise inequalities, which are related to the definition of a subgradient for convex functions. We can then find a unique diagonal matrix C by minimizing the quadratic model at x_{t+1} subject to these inequality constraints and the constraint that C is diagonal with positive entries. The solution (see the exercises) yields a simple componentwise updating formula for the diagonal elements c_1, ..., c_n of C, where u := ∇f(x_t) and a unit learning rate, x_{t+1} = x_t − C u, is assumed.

Example (Scalar Hessian Update). If the identity matrix is used in place of the inverse Hessian, one obtains steepest descent or gradient descent methods, in which the iteration reduces to

  x_{t+1} = x_t − α_t ∇f(x_t).

The rationale for the name steepest descent is as follows. If we start from any point and make an infinitesimal move in some direction, then the function value is reduced by the largest magnitude in the (unit-norm) direction

  u* := −∇f(x)/||∇f(x)||.

This is seen from the following inequality for all unit vectors u (that is, ||u|| = 1):

  u^T ∇f(x) ≥ −|u^T ∇f(x)| ≥ −||∇f(x)|| ||u|| = −||∇f(x)|| = (u*)^T ∇f(x),

where the second inequality is the Cauchy-Schwarz inequality and the last equality uses ||u|| = 1. Observe that equality is achieved if and only if u = u*.
The steepest-descent iteration, x_{t+1} = x_t − α_t ∇f(x_t), still requires a suitable choice of the learning rate α_t. An alternative way to think about the iteration is to assume that the learning rate is always unity, and that at each iteration we use an inverse Hessian matrix of the form C = α_t I_n for some positive constant α_t. Satisfying the secant condition C u = d exactly with a matrix of the form α I_n is in general not possible; however, it is possible to choose α so that the secant condition is satisfied in the direction of u (or, alternatively, d). This gives the Barzilai-Borwein formulas for the learning rate at iteration t:

  α_t = (d^T d)/(d^T u), or alternatively α_t = (d^T u)/(u^T u),

where d := x_t − x_{t−1} and u := ∇f(x_t) − ∇f(x_{t−1}).
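A short NumPy sketch of steepest descent with the first Barzilai-Borwein step size; the quadratic test problem and the initial step size are made up for illustration.

import numpy as np

def bb_gradient_descent(grad, x, n_iter=50, a0=1e-3):
    """Steepest descent with the Barzilai-Borwein step size a_t = (d^T d)/(d^T u)."""
    g, a = grad(x), a0
    for _ in range(n_iter):
        x_new = x - a * g
        g_new = grad(x_new)
        d, u = x_new - x, g_new - g
        if abs(d @ u) > 1e-12:
            a = (d @ d) / (d @ u)            # first Barzilai-Borwein formula
        x, g = x_new, g_new
    return x

# Hypothetical quadratic test problem with gradient A x - b.
A = np.array([[10.0, 2.0], [2.0, 1.0]]); b = np.array([1.0, 0.0])
print(bb_gradient_descent(lambda x: A @ x - b, np.zeros(2)))
print(np.linalg.solve(A, b))                 # the two should be close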
Normal Approximation Method

Let φ_{H_t^{-1}}(x − x_{t+1}) denote the pdf of the N(x_{t+1}, H_t^{-1}) distribution. As we already saw in the low-rank update example, the quadratic approximation of f in the neighborhood of x_t is equivalent (up to a constant) to minus the logarithm of the pdf φ_{H_t^{-1}}(x − x_{t+1}). In other words, we use φ_{H_t^{-1}}(x − x_{t+1}) as a simple model for the density

  exp(−f(x)) / ∫ exp(−f(y)) dy.

One consequence of the normal approximation is that for x in the neighborhood of x_{t+1} we can write

  ∇f(x) ≈ −∇ ln φ_{H_t^{-1}}(x − x_{t+1}) = H_t (x − x_{t+1}).

In other words, ∇f(x)[∇f(x)]^T ≈ H_t (x − x_{t+1})(x − x_{t+1})^T H_t, and taking expectations on both sides with respect to the N(x_{t+1}, H_t^{-1}) distribution gives

  E ∇f(X)[∇f(X)]^T ≈ H_t.

This suggests that, given the gradient vectors computed over the past h Newton iterations (where h stands for history), u_i := ∇f(x_i), i = t − h + 1, ..., t, the Hessian matrix H_t can be approximated via the average

  (1/h) Σ_{i=t−h+1}^t u_i u_i^T.

A shortcoming of this approximation is that, unless h is large enough, the Hessian approximation may not be full rank and hence not invertible. To ensure that the Hessian approximation is invertible, we add a suitable diagonal matrix D to obtain the regularized version of the approximation

  H_t ≈ D + (1/h) Σ_{i=t−h+1}^t u_i u_i^T.

With this full-rank approximation of the Hessian, the Newton search direction becomes

  d_t = −( D + (1/h) Σ_{i=t−h+1}^t u_i u_i^T )^{-1} u_t.

Thus, d_t can be computed efficiently via the Sherman-Morrison formula. Further to this, the search direction can be efficiently updated to the next one,

  d_{t+1} = −( D + (1/h) Σ_{i=t−h+2}^{t+1} u_i u_i^T )^{-1} u_{t+1},

in O(n²) time, thus avoiding the usual O(n³) cost of matrix inversion (see the exercises).

Nonlinear Least Squares

Consider the squared-error training loss in nonlinear regression,

  ℓ_τ(g(· | β)) = (1/n) Σ_{i=1}^n (g(x_i | β) − y_i)²,

where g(· | β) is a nonlinear prediction function that depends on a parameter vector β (for example, the nonlinear logistic prediction function). The training loss can be written as (1/n) ||g(β) − y||², where g(β) := [g(x_1 | β), ..., g(x_n | β)]^T is the vector of predicted outputs. We wish to minimize the training loss in terms of β. In the Newton-like methods above, one derives an iterative minimization algorithm that is inspired by a Taylor expansion of the loss ℓ_τ(g(· | β)). Instead, given a current guess β_t, we can consider the Taylor expansion of the nonlinear prediction function g:

  g(β) ≈ g(β_t) + G_t (β − β_t),

where G_t := ∂g(β_t)/∂β is the Jacobian matrix of g(β) at β_t. Denoting the residual e_t := g(β_t) − y and replacing g(β) with its Taylor approximation in the training loss, we obtain the following approximation to the training loss in the neighborhood of β_t:

  ℓ_τ(g(· | β)) ≈ (1/n) ||G_t (β − β_t) + e_t||².

The minimization of the right-hand side is a linear least-squares problem, and therefore d_t := β − β_t satisfies the normal equations

  G_t^T G_t d_t = G_t^T (−e_t).

Assuming that G_t^T G_t is invertible, the normal equations yield the Gauss-Newton search direction

  d_t = −(G_t^T G_t)^{-1} G_t^T e_t.
Unlike the search direction for Newton-like algorithms, the search direction of a Gauss-Newton algorithm does not require the computation of a Hessian matrix. Observe that in the Gauss-Newton approach we determine d_t by viewing the search direction as the coefficients in a linear regression with feature matrix G_t and response −e_t. This suggests that, instead of using linear regression, we can compute d_t via ridge regression with a suitable choice for the regularization parameter γ:

  d_t = −(G_t^T G_t + n γ I)^{-1} G_t^T e_t.

If we replace n I with the diagonal matrix diag(G_t^T G_t), we obtain the Levenberg-Marquardt search direction

  d_t = −(G_t^T G_t + γ diag(G_t^T G_t))^{-1} G_t^T e_t.

Recall that the ridge regularization parameter has the following effect on the least-squares solution: when it is zero, the solution d_t coincides with the search direction of the Gauss-Newton method, and when γ tends to infinity, ||d_t|| tends to zero. Thus, γ controls both the magnitude and the direction of the vector d_t. A simple version of the Levenberg-Marquardt algorithm is the following.

Algorithm: Levenberg-Marquardt for minimizing ||g(β) − y||².
input: an initial guess β_0, a stopping error ε > 0, and a training set τ.
output: an approximate minimizer of ||g(β) − y||².
1. γ ← 0.01 (or another default value), t ← 0, e_0 ← g(β_0) − y
2. while the stopping condition is not met do
3.   compute the search direction d_t via the Levenberg-Marquardt formula above
4.   e_{t+1} ← g(β_t + d_t) − y
5.   if ||e_{t+1}|| < ||e_t|| then (accept the step)
6.     γ ← γ/10, β_{t+1} ← β_t + d_t
7.   else (reject the step)
8.     γ ← 10 γ, β_{t+1} ← β_t, e_{t+1} ← e_t
9.   t ← t + 1
10. return β_t
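A compact Python sketch of the Levenberg-Marquardt iteration above, applied to a hypothetical exponential model g(x | β) = β_1 exp(β_2 x) with synthetic data; the damping factors of 10 follow the algorithm as written.

import numpy as np

def levenberg_marquardt(g, jac, beta, y, gamma=0.01, eps=1e-8, max_iter=100):
    """Levenberg-Marquardt minimization of ||g(beta) - y||^2."""
    e = g(beta) - y
    for _ in range(max_iter):
        G = jac(beta)
        A = G.T @ G
        d = np.linalg.solve(A + gamma * np.diag(np.diag(A)), -G.T @ e)
        if np.linalg.norm(d) < eps:
            break
        e_new = g(beta + d) - y
        if np.sum(e_new**2) < np.sum(e**2):          # accept the step, decrease damping
            beta, e, gamma = beta + d, e_new, gamma / 10
        else:                                        # reject the step, increase damping
            gamma *= 10
    return beta

# Synthetic data from the hypothetical model y = b1 * exp(b2 * x) + noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = 2.0 * np.exp(1.5 * x) + 0.05 * rng.standard_normal(30)
g = lambda b: b[0] * np.exp(b[1] * x)
jac = lambda b: np.column_stack([np.exp(b[1] * x), b[0] * x * np.exp(b[1] * x)])
print(levenberg_marquardt(g, jac, np.array([1.0, 1.0]), y))   # roughly [2.0, 1.5]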
Constrained Minimization via Penalty Functions

A constrained optimization problem can sometimes be reformulated as a simpler unconstrained problem. For example, an unconstrained set Y can be transformed to the feasible region X of the constrained problem via a function φ: R^n → R^n such that φ(Y) = X. Then the constrained problem is equivalent to the minimization problem

  min_{y ∈ Y} f(φ(y)),

in the sense that a solution x* of the original problem is obtained from a transformed solution y* via x* = φ(y*). The table below lists some examples of possible transformations.

Table: Some transformations to eliminate constraints.
Constrained          | Unconstrained
x > 0                | x = exp(y)
x ≥ 0                | x = y²
a ≤ x ≤ b            | x = a + (b − a) sin²(y)

Unfortunately, an unconstrained minimization method used in combination with these transformations is rarely effective. Instead, it is more common to use penalty functions. The overarching idea of penalty functions is to transform a constrained problem into an unconstrained problem by adding weighted constraint-violation terms to the original objective function, with the premise that the new problem has a solution that is identical or close to the original one. For example, if there are only equality constraints, then

  S(x) := f(x) + Σ_{i=1}^m a_i |h_i(x)|,

for suitable constants a_1, ..., a_m > 0, gives an exact penalty function, in the sense that the minimizer of the penalized function S is equal to the minimizer of f subject to the equality constraints h_1(x) = 0, ..., h_m(x) = 0. With the addition of inequality constraints, one could use

  S(x) = f(x) + Σ_{i=1}^m a_i |h_i(x)| + Σ_{j=1}^k b_j max{g_j(x), 0},

for some constants a_1, ..., a_m and b_1, ..., b_k.

Example (Alternating Direction Method of Multipliers). The Lagrange method is designed to handle convex minimization subject to equality constraints. Nevertheless, some practical algorithms may still use the penalty-function approach in combination with the Lagrangian method. An example is the alternating direction method of multipliers (ADMM) [ ]. The ADMM solves problems of the form

  min_{x ∈ R^n, z ∈ R^m} f(x) + g(z) subject to A x + B z = c,

where A is a p × n matrix, B is a p × m matrix, c ∈ R^p, and f: R^n → R and g: R^m → R are convex functions. The approach is to form the augmented Lagrangian

  L_ρ(x, z, β) := f(x) + g(z) + β^T (A x + B z − c) + (ρ/2) ||A x + B z − c||²,

where ρ > 0 is a penalty parameter and β are the dual variables. The ADMM then iterates through updates of the following form:

  x^{(t+1)} = argmin_{x ∈ R^n} L_ρ(x, z^{(t)}, β^{(t)}),
  z^{(t+1)} = argmin_{z ∈ R^m} L_ρ(x^{(t+1)}, z, β^{(t)}),
  β^{(t+1)} = β^{(t)} + ρ (A x^{(t+1)} + B z^{(t+1)} − c).
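As an illustration of these updates, here is a minimal NumPy sketch of ADMM applied to a lasso-type problem (our choice of instance, not one from the text): f(x) = ½||Dx − b||², g(z) = λ||z||₁, with constraint x − z = 0, so that A = I, B = −I, and c = 0. The x- and z-updates then have closed forms.

import numpy as np

def soft_threshold(v, kappa):
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_lasso(D, b, lam, rho=1.0, n_iter=200):
    """ADMM for min_x 0.5*||D x - b||^2 + lam*||z||_1 subject to x - z = 0."""
    n = D.shape[1]
    x = z = beta = np.zeros(n)                          # beta holds the dual variables
    Q = np.linalg.inv(D.T @ D + rho * np.eye(n))        # cached for the x-update
    for _ in range(n_iter):
        x = Q @ (D.T @ b - beta + rho * z)              # x-update (quadratic minimization)
        z = soft_threshold(x + beta / rho, lam / rho)   # z-update (soft thresholding)
        beta = beta + rho * (x - z)                     # dual update
    return z

# Hypothetical sparse-regression data.
rng = np.random.default_rng(1)
D = rng.standard_normal((50, 10))
x_true = np.zeros(10); x_true[:3] = [3.0, -2.0, 1.5]
b = D @ x_true + 0.1 * rng.standard_normal(50)
print(np.round(admm_lasso(D, b, lam=1.0), 2))           # approximately sparse, close to x_true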
Suppose now that the constrained optimization problem has inequality constraints only. Barrier functions are an important example of penalty functions that can handle inequality constraints. The prototypical example is the logarithmic barrier function, which leads to the unconstrained optimization of

  f_ν(x) := f(x) − ν Σ_{i=1}^k ln(−g_i(x)),

such that the minimizer of f_ν tends to the minimizer of f as ν ↓ 0. A direct minimization of f_ν via an unconstrained minimization algorithm is frequently too difficult. Instead, it is common to combine the logarithmic barrier function with the Lagrangian method, as follows. The idea is to introduce nonnegative auxiliary or slack variables s_1, ..., s_k that satisfy the equalities g_i(x) + s_i = 0 for all i = 1, ..., k. These equalities ensure that the inequality constraints are maintained: g_i(x) = −s_i ≤ 0 for all i. Then, instead of the unconstrained optimization of f_ν, we consider the unconstrained optimization of the Lagrangian

  L(x, s, β) = f(x) − ν Σ_{i=1}^k ln s_i + Σ_{i=1}^k β_i (g_i(x) + s_i),

where β_1, ..., β_k are the Lagrange multipliers for the equalities g_i(x) + s_i = 0. Observe how the logarithmic barrier function keeps the slack variables positive. In addition, while the optimization of f_ν is over n dimensions (recall that x ∈ R^n), the optimization of the Lagrangian function is over n + 2k dimensions. Despite this enlargement of the search space with the variables s and β, the optimization of the Lagrangian is easier in practice than the direct optimization of f_ν.

Example (Interior-Point Method for Nonnegativity). One of the simplest and most common constrained optimization problems can be formulated as the minimization of f(x) subject to a nonnegative x, that is,

  min_{x ≥ 0} f(x).

In this case, the Lagrangian with logarithmic barrier is

  L(x, s, β) = f(x) − ν Σ_{k=1}^n ln s_k + Σ_{k=1}^n β_k (s_k − x_k).

The KKT conditions are a necessary condition for a minimizer, and yield the nonlinear system for [x^T, s^T, β^T]^T:

  ∇f(x) − β = 0,
  β − ν/s = 0,
  s − x = 0,

where ν/s is shorthand notation for the column vector with components {ν/s_k}. To solve this system, we can use Newton's method for root-finding (see, for example, the Newton-Raphson algorithm given earlier), which requires a formula for the Jacobian matrix of the mapping above. Here, this 3n × 3n matrix is

  J(x, s, β) = [ H(x)   O     −I_n
                 O      D      I_n
                −I_n    I_n    O  ],
where H(x) is the Hessian of f at x, D := diag(ν/s_1², ..., ν/s_n²) is an n × n diagonal matrix, and O and I_n denote the n × n zero and identity matrices, respectively. Applying the matrix blockwise inversion formula, the inverse of this Jacobian matrix can be written out explicitly; solving the Newton system J(x, s, β)[δx; δs; δβ] = −[∇f(x) − β; β − ν/s; s − x] then gives the search direction in Newton's root-finding method:

  δx = −(H(x) + D)^{-1} (∇f(x) − ν/s + D(x − s)),
  δs = δx + x − s,
  δβ = ν/s − β − D δs,

where we have assumed that H(x) + D is a positive-definite matrix. If at any step of the iteration this matrix fails to be positive definite, then Newton's root-finding algorithm may fail to converge. Thus, any practical implementation will have to include a fail-safe feature to guard against this possibility.

In summary, for a given penalty parameter ν, we can locate the approximate nonnegative minimizer x_ν of f using, for example, the version of the Newton-Raphson root-finding method given in the algorithm below. In practice, one needs to choose a sufficiently small value of ν, so that the output x_ν of the algorithm is a good approximation to x* = argmin_{x ≥ 0} f(x). Alternatively, one can create a decreasing sequence of penalty parameters ν_1 > ν_2 > ⋯ and compute the corresponding solutions x_{ν_1}, x_{ν_2}, ... of the penalized problems. In this so-called interior-point method, a given x_{ν_i} is used as the initial guess for computing x_{ν_{i+1}}, and so on, until the approximation to the minimizer x* = argmin_{x ≥ 0} f(x) is deemed sufficiently accurate.
Algorithm: Approximating x* = argmin_{x ≥ 0} f(x) with a logarithmic barrier.
input: an initial guess x_0 > 0, a barrier parameter ν > 0, and a stopping error ε > 0.
output: the approximate nonnegative minimizer x_ν of f(x).
1. x ← x_0, s ← x, β ← ν/s, δx ← x
2. while ||δx|| > ε and the budget is not exhausted do
3.   compute the gradient u = ∇f(x) and the Hessian H of f at x
4.   w ← ν/s², r ← u − ν/s + w ⊙ (x − s)   (componentwise operations)
5.   if H + diag(w) is positive definite then (Cholesky successful)
6.     compute the Cholesky factor L satisfying L L^T = H + diag(w)
7.     δx ← L^{-1}(−r) (computed by forward substitution)
8.     δx ← (L^T)^{-1} δx (computed by backward substitution)
9.   else
10.    δx ← −u   (if Cholesky fails, do a steepest-descent step)
11.  δs ← δx + x − s, δβ ← ν/s − β − w ⊙ δs, α ← 1
12.  while min{s + α δs} ≤ 0 do α ← α/2   (ensure nonnegative slack variables)
13.  x ← x + α δx, s ← s + α δs, β ← β + α δβ
14. return x_ν = x

Further Reading

For an excellent introduction to convex optimization and Lagrangian duality, see [ ]. A classical text on optimization algorithms and, in particular, on quasi-Newton methods is [ ]. For more details on the alternating direction method of multipliers, see [ ].
Probability and Statistics

The purpose of this appendix is to establish the baseline probability and statistics background for this book. We review basic concepts such as the sum and product rules of probability, random variables and their probability distributions, expectations, independence, conditional probability, transformation rules, limit theorems, and Markov chains. The properties of the multivariate normal distribution are discussed in more detail. The main ideas from statistics are also reviewed, including estimation techniques (such as maximum likelihood estimation), confidence intervals, and hypothesis testing.

Random Experiments and Probability Spaces

The basic notion in probability theory is that of a random experiment: an experiment whose outcome cannot be determined in advance. Mathematically, a random experiment is modeled via a triplet (Ω, H, P), where:

1. Ω is the set of all possible outcomes of the experiment, called the sample space.
2. H is the collection of all subsets of Ω to which a probability can be assigned; such subsets are called events.
3. P is a probability measure, which assigns to each event A a number P[A] between 0 and 1, indicating the likelihood that the outcome of the random experiment lies in A.

Any probability measure P must satisfy the following Kolmogorov axioms:

1. P[A] ≥ 0 for every event A;
2. P[Ω] = 1;
3. for any sequence of events A_1, A_2, ...,

  P[∪_i A_i] ≤ Σ_i P[A_i],

with equality whenever the events are disjoint (that is, non-overlapping).
When the third axiom holds as an equality, it is often referred to as the sum rule of probability. It simply states that if an event can happen in a number of different but not simultaneous ways, the probability of that event is the sum of the probabilities of the comprising events. If the events are allowed to overlap, then the inequality is called the union bound.

In many applications the sample space is countable, that is, Ω = {a_1, a_2, ...}. In this case the easiest way to specify a probability measure P is to first assign a number p_i to each elementary event {a_i}, with Σ_i p_i = 1, and then to define

  P[A] = Σ_{i : a_i ∈ A} p_i for all A ⊆ Ω.

Here the collection of events H can be taken to be equal to the collection of all subsets of Ω. The triple (Ω, H, P) is called a discrete probability space. This idea is graphically represented in the figure below: each element a_i, represented by a dot, is assigned a weight (that is, a probability) p_i, indicated by the size of the dot. The probability of the event A is simply the sum of the weights of all the outcomes in A.

[Figure: A discrete probability space.]

Remark (Equilikely Principle). A special case of a discrete probability space occurs when a random experiment has finitely many outcomes that are all equally likely. In this case the probability measure is given by

  P[A] = |A| / |Ω|,

where |A| denotes the number of outcomes in A and |Ω| is the total number of outcomes. Thus, the calculation of probabilities reduces to counting numbers of outcomes in events. This is called the equilikely principle.

Random Variables and Probability Distributions

It is often convenient to describe a random experiment via "random variables", representing numerical measurements of the experiment. Random variables are usually denoted by capital letters from the last part of the alphabet. From a mathematical point of view, a random variable X is a function from Ω to R such that sets of the form {X ≤ b} := {ω ∈ Ω : X(ω) ≤ b} are events (and so can be assigned a probability).
All probabilities involving a random variable X can be computed, in principle, from its cumulative distribution function (cdf), defined by

  F(x) = P[X ≤ x], x ∈ R.

For example,

  P[a < X ≤ b] = P[X ≤ b] − P[X ≤ a] = F(b) − F(a).

The figure below shows a generic cdf. Note that any cdf is right-continuous, increasing, and lies between 0 and 1.

[Figure: A cumulative distribution function (cdf).]

A cdf F_d is called discrete if there exist numbers x_1, x_2, ... and probabilities f(x_1), f(x_2), ... summing up to 1, such that for all x,

  F_d(x) = Σ_{x_i ≤ x} f(x_i).

Such a cdf is piecewise constant and has jumps of sizes f(x_1), f(x_2), ... at the points x_1, x_2, ..., respectively. The function f(x) is called the probability mass function or discrete probability density function (pdf). It is often easier to use the pdf rather than the cdf, since probabilities can simply be calculated from it via summation:

  P[a < X ≤ b] = Σ_{a < x ≤ b} f(x),

as illustrated in the figure below.

[Figure: A discrete probability density function (pdf). The darker area corresponds to the probability P[a < X ≤ b].]
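A quick numerical illustration (assuming SciPy is available): computing P[1 < X ≤ 5] for a hypothetical X ~ Bin(10, 0.3), both by summing the discrete pdf and from the cdf.

import numpy as np
from scipy.stats import binom

# P[1 < X <= 5] for X ~ Bin(10, 0.3): sum the discrete pdf, or use F(5) - F(1).
n, p = 10, 0.3
via_pdf = sum(binom.pmf(x, n, p) for x in range(2, 6))
via_cdf = binom.cdf(5, n, p) - binom.cdf(1, n, p)
print(via_pdf, via_cdf)   # both approximately 0.8033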
A cdf F_c is called continuous if there exists a positive function f such that for all x,

  F_c(x) = ∫_{−∞}^x f(u) du.

(In advanced probability, we would say: absolutely continuous with respect to the Lebesgue measure.) Note that such an F_c is differentiable (and hence continuous) with derivative f. The function f is called the probability density function (continuous pdf). By the fundamental theorem of calculus, we have

  P[a < X ≤ b] = F(b) − F(a) = ∫_a^b f(x) dx.

Thus, calculating probabilities reduces to integration, as illustrated in the figure below.

[Figure: A continuous probability density function (pdf). The shaded area corresponds to the probability P[X ∈ B], with B here being the interval (a, b].]

Remark (Probability Density and Probability Mass). It is important to note that we deliberately use the same name, "pdf", and symbol, f, in both the discrete and the continuous case, rather than distinguish between a probability mass function (pmf) and a probability density function (pdf). From a theoretical point of view, the pdf plays exactly the same role in the discrete and continuous cases.

We use the notation X ~ Dist, X ~ f, and X ~ F to indicate that X has distribution Dist, pdf f, and cdf F. The tables below list a number of important continuous and discrete distributions. Note that Γ is the gamma function:

  Γ(α) = ∫_0^∞ e^{−x} x^{α−1} dx, α > 0.
Table: Commonly used continuous distributions.
Name (notation)           | pdf f(x)                                                                      | x ∈   | Parameters
Uniform U[a, b]           | 1/(b − a)                                                                     | [a, b] | a < b
Normal N(μ, σ²)           | (1/(σ√(2π))) e^{−((x−μ)/σ)²/2}                                                | R      | μ ∈ R, σ > 0
Gamma Gamma(α, λ)         | λ^α x^{α−1} e^{−λx} / Γ(α)                                                    | R_+    | α, λ > 0
Inverse gamma InvGamma(α, λ) | λ^α x^{−α−1} e^{−λ/x} / Γ(α)                                               | R_+    | α, λ > 0
Exponential Exp(λ)        | λ e^{−λx}                                                                     | R_+    | λ > 0
Beta Beta(α, β)           | (Γ(α + β)/(Γ(α)Γ(β))) x^{α−1} (1 − x)^{β−1}                                   | [0, 1] | α, β > 0
Weibull Weib(α, λ)        | αλ (λx)^{α−1} e^{−(λx)^α}                                                     | R_+    | α, λ > 0
Pareto Pareto(α, λ)       | αλ (1 + λx)^{−(α+1)}                                                          | R_+    | α, λ > 0
Student t_ν               | (Γ((ν+1)/2)/(√(νπ) Γ(ν/2))) (1 + x²/ν)^{−(ν+1)/2}                             | R      | ν > 0
F F(m, n)                 | Γ((m+n)/2) (m/n)^{m/2} x^{m/2−1} / (Γ(m/2) Γ(n/2) [1 + mx/n]^{(m+n)/2})       | R_+    | m, n ∈ {1, 2, ...}

The Gamma(n/2, 1/2) distribution is called the chi-squared distribution with n degrees of freedom, denoted χ²_n. The t_1 distribution is also called the Cauchy distribution.

Table: Commonly used discrete distributions.
Name (notation)           | f(x)                        | x ∈           | Parameters
Bernoulli Ber(p)          | p^x (1 − p)^{1−x}           | {0, 1}        | 0 ≤ p ≤ 1
Binomial Bin(n, p)        | (n choose x) p^x (1 − p)^{n−x} | {0, 1, ..., n} | 0 ≤ p ≤ 1, n ∈ N
Discrete uniform U{1, ..., n} | 1/n                     | {1, ..., n}   | n ∈ {1, 2, ...}
Geometric Geom(p)         | p (1 − p)^{x−1}             | {1, 2, ...}   | 0 < p ≤ 1
Poisson Poi(λ)            | e^{−λ} λ^x / x!             | {0, 1, ...}   | λ > 0
Expectation

It is often useful to consider different kinds of numerical characteristics of a random variable. One such quantity is the expectation, which measures the "average" value of the distribution. The expectation (or expected value or mean) of a random variable X with pdf f, denoted by E X (and sometimes μ), is defined by

  E X = Σ_x x f(x) (discrete case), E X = ∫ x f(x) dx (continuous case).

If X is a random variable, then a function of X, such as X² or sin(X), is again a random variable. Moreover, the expected value of a function of X is simply a weighted average of the possible values that this function can take. That is, for any real function h,

  E h(X) = Σ_x h(x) f(x) (discrete case), E h(X) = ∫ h(x) f(x) dx (continuous case),

provided that the sum or integral is well-defined.

The variance of a random variable X, denoted by Var X (and sometimes σ²), is defined by

  Var X = E (X − E X)² = E X² − (E X)².

The square root of the variance is called the standard deviation. The table below lists the expectations and variances for some well-known distributions. Both the variance and the standard deviation measure the spread or dispersion of the distribution. Note, however, that the standard deviation measures the dispersion in the same units as the random variable, unlike the variance, which uses squared units.

Table: Expectations and variances for some well-known distributions.
Dist.           | E X               | Var X
Bin(n, p)       | n p               | n p (1 − p)
Geom(p)         | 1/p               | (1 − p)/p²
Poi(λ)          | λ                 | λ
U[a, b]         | (a + b)/2         | (b − a)²/12
Exp(λ)          | 1/λ               | 1/λ²
t_ν (ν > 2)     | 0                 | ν/(ν − 2)
Gamma(α, λ)     | α/λ               | α/λ²
N(μ, σ²)        | μ                 | σ²
Beta(α, β)      | α/(α + β)         | αβ/((α + β)²(α + β + 1))
Weib(α, λ)      | Γ(1 + 1/α)/λ      | (Γ(1 + 2/α) − Γ(1 + 1/α)²)/λ²
F(m, n) (n > 4) | n/(n − 2)         | 2 n² (m + n − 2)/(m (n − 2)² (n − 4))

(We only use brackets in an expectation if it is unclear with respect to which random variable the expectation is taken.)
It is sometimes useful to consider the moment generating function of a random variable X. This is the function defined by

  M(s) = E e^{sX}, s ∈ R.

The moment generating functions of two random variables coincide if and only if the random variables have the same distribution; see also the theorem on independence below.

Example (Moment Generating Function of the Gamma(α, λ) Distribution). Let X ~ Gamma(α, λ). For s < λ, the moment generating function of X at s is given by

  M(s) = E e^{sX} = ∫_0^∞ e^{sx} (λ^α x^{α−1} e^{−λx} / Γ(α)) dx
       = (λ/(λ − s))^α ∫_0^∞ ((λ − s)^α x^{α−1} e^{−(λ−s)x} / Γ(α)) dx = (λ/(λ − s))^α,

where the last integral equals 1 because the integrand is the pdf of the Gamma(α, λ − s) distribution. Interestingly, the moment generating function has a much simpler formula than the pdf.

Joint Distributions

Distributions for random vectors and stochastic processes can be specified in much the same way as for random variables. In particular, the distribution of a random vector X = [X_1, ..., X_n]^T is completely determined by specifying the joint cdf F, defined by

  F(x_1, ..., x_n) = P[X_1 ≤ x_1, ..., X_n ≤ x_n], x_i ∈ R, i = 1, ..., n.

Similarly, the distribution of a stochastic process, that is, a collection of random variables {X_t, t ∈ T} for some index set T, is completely determined by its finite-dimensional distributions; specifically, the distributions of the random vectors [X_{t_1}, ..., X_{t_n}]^T, for every choice of n and t_1, ..., t_n ∈ T.

By analogy to the one-dimensional case, a random vector X = [X_1, ..., X_n]^T taking values in R^n is said to have a pdf f if, in the continuous case,

  P[X ∈ B] = ∫_B f(x) dx

for all n-dimensional rectangles B. Replace the integral with a sum for the discrete case. The pdf f is also called the joint pdf of X_1, ..., X_n. The pdfs of the individual components, called marginal pdfs, can be recovered from the joint pdf by "integrating out" the other variables. For example, for a continuous random vector [X, Y]^T with pdf f, the pdf f_X of X is given by

  f_X(x) = ∫ f(x, y) dy.
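A small numerical illustration of marginalization (the joint pdf f(x, y) = x + y on the unit square is a standard textbook-style example, not taken from this text): the marginal f_X(x) = x + 1/2 is recovered by integrating out y.

import numpy as np
from scipy.integrate import quad

f = lambda x, y: x + y            # hypothetical joint pdf on [0, 1] x [0, 1]
x = 0.3
marginal, _ = quad(lambda y: f(x, y), 0, 1)
print(marginal)                   # 0.8 = 0.3 + 0.5, i.e., f_X(0.3)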
Conditioning and Independence

Conditional probabilities and conditional distributions are used to model additional information on a random experiment. Independence is used to model a lack of such information.

Conditional Probability

Suppose some event B ⊆ Ω occurs. Given this fact, event A will occur if and only if A ∩ B occurs, and the relative chance of A occurring is therefore P[A ∩ B]/P[B], provided P[B] > 0. This leads to the definition of the conditional probability of A given B:

  P[A | B] = P[A ∩ B]/P[B], if P[B] > 0.

The above definition breaks down if P[B] = 0; such conditional probabilities must be treated with more care [ ].

Three important consequences of the definition of conditional probability are:

1. Product rule. For any sequence of events A_1, A_2, ..., A_n,

  P[A_1 ∩ ⋯ ∩ A_n] = P[A_1] P[A_2 | A_1] P[A_3 | A_1 ∩ A_2] ⋯ P[A_n | A_1 ∩ ⋯ ∩ A_{n−1}],

using the abbreviation A_{1:k} := A_1 ∩ A_2 ∩ ⋯ ∩ A_k.

2. Law of total probability. If {B_i} forms a partition of Ω (that is, the B_i are disjoint and ∪_i B_i = Ω), then for any event A,

  P[A] = Σ_i P[A | B_i] P[B_i].

3. Bayes' rule. Let {B_i} form a partition of Ω. Then, for any event A with P[A] > 0,

  P[B_j | A] = P[A | B_j] P[B_j] / Σ_i P[A | B_i] P[B_i].

Independence

Two events A and B are said to be independent if the knowledge that B has occurred does not change the probability that A occurs. That is,

  A, B independent ⟺ P[A | B] = P[A].

Since P[A | B] = P[A ∩ B]/P[B], an alternative definition of independence is

  A, B independent ⟺ P[A ∩ B] = P[A] P[B].

This definition covers the case where P[B] = 0, and can be extended to arbitrarily many events: events A_1, A_2, ... are said to be (mutually) independent if, for any k and any choice of distinct indices i_1, ..., i_k,

  P[A_{i_1} ∩ A_{i_2} ∩ ⋯ ∩ A_{i_k}] = P[A_{i_1}] P[A_{i_2}] ⋯ P[A_{i_k}].
The concept of independence can also be formulated for random variables. Random variables X_1, X_2, ... are said to be independent if the events {X_{i_1} ≤ x_{i_1}}, ..., {X_{i_n} ≤ x_{i_n}} are independent for all finite choices of n distinct indices i_1, ..., i_n and values x_{i_1}, ..., x_{i_n}. An important characterization of independent random variables is the following (for a proof, see [ ], for example).

Theorem (Independence Characterization). Random variables X_1, ..., X_n with marginal pdfs f_{X_1}, ..., f_{X_n} and joint pdf f are independent if and only if

  f(x_1, ..., x_n) = f_{X_1}(x_1) ⋯ f_{X_n}(x_n) for all x_1, ..., x_n.

Many probabilistic models involve random variables X_1, X_2, ... that are independent and identically distributed, abbreviated as iid. We use this abbreviation throughout this book.

Expectation and Covariance

Similar to the univariate case, the expected value of a real-valued function h of a random vector X = [X_1, ..., X_n]^T is a weighted average of all values that h(x) can take. Specifically, in the continuous case,

  E h(X) = ∫ h(x) f(x) dx.

In the discrete case, replace this multidimensional integral with a sum. Using this result, it is not difficult to show that for any collection of dependent or independent random variables X_1, ..., X_n,

  E[a + b_1 X_1 + b_2 X_2 + ⋯ + b_n X_n] = a + b_1 E X_1 + ⋯ + b_n E X_n,

for all constants a, b_1, ..., b_n. Moreover, for independent random variables,

  E[X_1 X_2 ⋯ X_n] = E X_1 E X_2 ⋯ E X_n.

We leave the proofs as an exercise.

The covariance of two random variables X and Y with expectations μ_X and μ_Y, respectively, is defined as

  Cov(X, Y) = E[(X − μ_X)(Y − μ_Y)].

This is a measure of the amount of linear dependency between the variables. Let σ_X² = Var X and σ_Y² = Var Y. A scaled version of the covariance is given by the correlation coefficient,

  ϱ(X, Y) = Cov(X, Y)/(σ_X σ_Y).

The following properties follow directly from the definitions of variance and covariance:

1. Var X = E X² − (E X)².
2. Var[a X + b] = a² Var X.
3. Cov(X, Y) = E[X Y] − μ_X μ_Y.
4. Cov(X, Y) = Cov(Y, X).
5. −σ_X σ_Y ≤ Cov(X, Y) ≤ σ_X σ_Y.
6. Cov(a X + b Y, Z) = a Cov(X, Z) + b Cov(Y, Z).
7. Cov(X, X) = σ_X² = Var X.
8. Var[X + Y] = σ_X² + σ_Y² + 2 Cov(X, Y).
9. If X and Y are independent, then Cov(X, Y) = 0.

As a consequence of these properties, we have that for any sequence of independent random variables X_1, ..., X_n with variances σ_1², ..., σ_n²,

  Var[a + b_1 X_1 + b_2 X_2 + ⋯ + b_n X_n] = b_1² σ_1² + ⋯ + b_n² σ_n²,

for any choice of constants a and b_1, ..., b_n.

For random column vectors, such as X = [X_1, ..., X_n]^T, it is convenient to write the expectations and covariances in vector and matrix notation. For a random vector X we define its expectation vector as the vector of expectations

  μ = [μ_1, ..., μ_n]^T := [E X_1, ..., E X_n]^T.

Similarly, the expectation of a random matrix is defined as the matrix of expectations of its entries. Given two random vectors X ∈ R^n and Y ∈ R^m, the n × m matrix

  Cov(X, Y) = E[(X − E X)(Y − E Y)^T]

has (i, j)-th element

  Cov(X_i, Y_j) = E[(X_i − E X_i)(Y_j − E Y_j)].

A consequence of this definition is that

  Cov(A X, B Y) = A Cov(X, Y) B^T,

where A and B are two matrices with n and m columns, respectively. The covariance matrix of the vector X is defined as the n × n matrix Σ := Cov(X, X). The covariance matrix is also denoted as Var(X) := Cov(X, X), in analogy with the scalar identity Var(X) = Cov(X, X).

A useful application of the cyclic property of the trace of a matrix is the following.

Theorem (Expectation of a Quadratic Form). Let A be an n × n matrix and X an n-dimensional random vector with expectation vector μ and covariance matrix Σ. The random variable Y := X^T A X has expectation

  E Y = tr(A Σ) + μ^T A μ.

Proof. Since Y is a scalar, it is equal to its trace. Now, using the cyclic property,

  E Y = E tr(Y) = E tr(X^T A X) = E tr(A X X^T) = tr(A E[X X^T]) = tr(A (Σ + μ μ^T)) = tr(A Σ) + tr(A μ μ^T) = tr(A Σ) + μ^T A μ.
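A quick Monte Carlo check of this identity (the normal sampling distribution and the particular A, μ, and Σ below are arbitrary choices for illustration):

import numpy as np

rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n))
mu = np.array([1.0, -1.0, 0.5])
B = rng.standard_normal((n, n))
Sigma = B @ B.T + np.eye(n)                     # a valid covariance matrix

X = rng.multivariate_normal(mu, Sigma, size=10**6)
mc = np.mean(np.einsum('ij,jk,ik->i', X, A, X)) # sample average of x^T A x
exact = np.trace(A @ Sigma) + mu @ A @ mu
print(mc, exact)                                # the two should be close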
Conditional Density and Conditional Expectation

Suppose X and Y are both discrete or both continuous, with joint pdf f, and suppose f_X(x) > 0. Then the conditional pdf of Y given X = x is given by

  f_{Y|X}(y | x) = f(x, y)/f_X(x) for all y.

In the discrete case, the formula is a direct translation of the definition of conditional probability, with f_{Y|X}(y | x) = P[Y = y | X = x]. In the continuous case, a similar interpretation in terms of densities can be used; see, for example, [ ]. The corresponding distribution is called the conditional distribution of Y given X = x. Note that the definition implies

  f(x, y) = f_X(x) f_{Y|X}(y | x).

This is useful when the marginal and conditional pdfs are given, rather than the joint one. More generally, for the n-dimensional case we have

  f(x_1, ..., x_n) = f_{X_1}(x_1) f_{X_2|X_1}(x_2 | x_1) ⋯ f_{X_n|X_1,...,X_{n−1}}(x_n | x_1, ..., x_{n−1}),

which is in essence a rephrasing of the product rule in terms of probability densities.

As a conditional pdf has all the properties of an ordinary pdf, we may define expectations with respect to it. The conditional expectation of Y given X = x is defined as

  E[Y | X = x] = Σ_y y f_{Y|X}(y | x) (discrete case), E[Y | X = x] = ∫ y f_{Y|X}(y | x) dy (continuous case).

Note that E[Y | X = x] is a function of x; the corresponding random variable is written as E[Y | X]. A similar formalism can be used when conditioning on a sequence of random variables X_1, ..., X_n. The conditional expectation has similar properties to the ordinary expectation. Other useful properties (see, for example, [ ]) are:

1. Tower property. If E Y exists, then E[E[Y | X]] = E Y.
2. Taking out what is known. If E Y exists, then E[X Y | X] = X E[Y | X].

Functions of Random Variables

Let x = [x_1, ..., x_n]^T be a column vector in R^n and A an m × n matrix. The mapping x ↦ z, with z = A x, is a linear transformation, as discussed earlier. Now consider a random vector X = [X_1, ..., X_n]^T and let

  Z := A X.

Then Z is a random vector in R^m. The following theorem details how the distribution of Z is related to that of X.
Theorem (Linear Transformation). If X has expectation vector μ_X and covariance matrix Σ_X, then the expectation vector of Z = A X is

  μ_Z = A μ_X,

and the covariance matrix of Z is

  Σ_Z = A Σ_X A^T.

If, in addition, A is an invertible n × n matrix and X is a continuous random vector with pdf f_X, then the pdf of the continuous random vector Z = A X is given by

  f_Z(z) = f_X(A^{-1} z) / |det(A)|, z ∈ R^n,

where |det(A)| denotes the absolute value of the determinant of A.

Proof. We have

  μ_Z = E Z = E[A X] = A E X = A μ_X

and

  Σ_Z = E[(Z − μ_Z)(Z − μ_Z)^T] = E[A (X − μ_X)(A (X − μ_X))^T] = A E[(X − μ_X)(X − μ_X)^T] A^T = A Σ_X A^T.

For invertible A and continuous (as opposed to discrete) X, let Z = A X and z = A x. Consider the n-dimensional cube C = [z_1, z_1 + h] × ⋯ × [z_n, z_n + h]. Then

  P[Z ∈ C] ≈ h^n f_Z(z)

by definition of the joint density of Z. Let D be the image of C under A^{-1}, that is, all points x such that A x ∈ C. Recall that any matrix B linearly transforms an n-dimensional rectangle with volume V into an n-dimensional parallelepiped with volume V |det(B)|. Thus, in addition to the above expression for P[Z ∈ C], we also have

  P[Z ∈ C] = P[X ∈ D] ≈ h^n |det(A^{-1})| f_X(x) = h^n |det(A)|^{-1} f_X(x).

Equating these two expressions for P[Z ∈ C], dividing both sides by h^n, and letting h go to 0, we obtain the stated formula for f_Z.
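A short simulation check of the expectation and covariance statements (the particular A, μ, and Σ below are arbitrary, and a normal distribution is used purely for convenience):

import numpy as np

rng = np.random.default_rng(3)
A = np.array([[1.0, 2.0, 0.0], [0.0, -1.0, 3.0]])      # a 2 x 3 matrix
mu = np.array([0.5, -1.0, 2.0])
Sigma = np.array([[2.0, 0.3, 0.0], [0.3, 1.0, 0.2], [0.0, 0.2, 0.5]])

X = rng.multivariate_normal(mu, Sigma, size=10**6)
Z = X @ A.T                                            # rows are samples of Z = A x
print(Z.mean(axis=0), A @ mu)                          # empirical mean vs. A mu
print(np.cov(Z, rowvar=False), A @ Sigma @ A.T)        # empirical covariance vs. A Sigma A^T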
For a generalization of the linear transformation rule, consider an arbitrary mapping x ↦ g(x), written out:

  [x_1, ..., x_n]^T ↦ [g_1(x), ..., g_n(x)]^T.

Theorem (Transformation Rule). Let X be an n-dimensional vector of continuous random variables with pdf f_X. Let Z = g(X), where g is an invertible mapping with inverse g^{-1} and Jacobian matrix J_g, that is, the matrix of partial derivatives of g. Then, at z = g(x), the random vector Z has pdf

  f_Z(z) = f_X(g^{-1}(z)) |det(J_{g^{-1}}(z))| = f_X(x) / |det(J_g(x))|, z ∈ R^n.

Proof. For a fixed x, let z = g(x), and thus x = g^{-1}(z). In a neighborhood of x, the function g behaves like a linear function, in the sense that g(x + δ) ≈ g(x) + J_g(x) δ for small vectors δ. Consequently, an infinitesimally small n-dimensional rectangle at x with volume V is transformed into an infinitesimally small n-dimensional parallelepiped at z with volume V |det(J_g(x))|. Now, as in the proof of the linear case, let C be a small cube around z = g(x) with volume h^n. Let D be the image of C under g^{-1}. Then

  h^n f_Z(z) ≈ P[Z ∈ C] ≈ h^n |det(J_{g^{-1}}(z))| f_X(x),

and since |det(J_{g^{-1}}(z))| = 1/|det(J_g(x))|, the result follows as h goes to 0.

Typically, in coordinate transformations it is g^{-1} that is given; that is, an expression for x as a function of z.

Example (Polar Transform). Suppose X, Y are independent and have a standard normal distribution. The joint pdf is

  f_{X,Y}(x, y) = (1/(2π)) e^{−(x² + y²)/2}, (x, y) ∈ R².

In polar coordinates we have

  X = R cos Θ and Y = R sin Θ,

where R ≥ 0 is the radius and Θ ∈ [0, 2π) the angle of the point (X, Y). What is the joint pdf of R and Θ? By the radial symmetry of the bivariate normal distribution, we would expect Θ to be uniform on (0, 2π). But what is the pdf of R? To work out the joint pdf, consider the inverse transformation g^{-1}, defined by

  [r, θ]^T ↦ [x, y]^T = [r cos θ, r sin θ]^T = g^{-1}([r, θ]^T).

The corresponding Jacobian matrix is

  J_{g^{-1}}(r, θ) = [ cos θ  −r sin θ
                       sin θ   r cos θ ],
which has determinant r. Since x² + y² = r² (cos² θ + sin² θ) = r², it follows by the transformation rule that the joint pdf of R and Θ is given by

  f_{R,Θ}(r, θ) = f_{X,Y}(x, y) r = (1/(2π)) r e^{−r²/2}, r ≥ 0, θ ∈ [0, 2π).

By integrating out θ and r, respectively, we find f_R(r) = r e^{−r²/2} and f_Θ(θ) = 1/(2π). Since f_{R,Θ} is the product of f_R and f_Θ, the random variables R and Θ are independent.

Multivariate Normal Distribution

The normal (or Gaussian) distribution, especially its multidimensional version, plays a central role in data science and machine learning. Recall that a random variable X is said to have a normal distribution with parameters μ and σ² if its pdf is given by

  f(x) = (1/(σ√(2π))) e^{−½ ((x − μ)/σ)²}, x ∈ R.

We write X ~ N(μ, σ²). The parameters μ and σ² are the expectation and variance of the distribution, respectively. If μ = 0 and σ = 1, then

  f(x) = (1/√(2π)) e^{−x²/2},

and the distribution is known as the standard normal distribution. The cdf of the standard normal distribution is often denoted by Φ and its pdf by φ.

[Figure: The pdf of the N(μ, σ²) distribution for various μ and σ².]

We next consider some important properties of the normal distribution.

Theorem (Standardization). Let X ~ N(μ, σ²) and define Z = (X − μ)/σ. Then Z has a standard normal distribution.
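A quick simulation check of this standardization property (the parameters μ = 3 and σ = 2 are arbitrary): the empirical cdf of Z = (X − μ)/σ is compared with the standard normal cdf Φ at a few points.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
mu, sigma = 3.0, 2.0
X = rng.normal(mu, sigma, size=10**6)
Z = (X - mu) / sigma                         # standardization
for z in (-1.0, 0.0, 1.5):
    print((Z <= z).mean(), norm.cdf(z))      # empirical cdf vs. Phi(z)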