We see that the Gaussian KDE is the average of a collection of normal pdfs, where each normal distribution is centered at a data point $x_i$ and has covariance matrix $\sigma^2 I_d$. A major question is how to choose the bandwidth $\sigma$ so as to best approximate the unknown pdf $f$. Choosing a very small $\sigma$ will result in a "spiky" estimate, whereas a large $\sigma$ will produce an over-smoothed estimate that may not identify important peaks that are present in the unknown pdf. The figure below illustrates this phenomenon. In this case the data are comprised of points uniformly drawn from the unit square; the true pdf is thus $1$ on $[0,1]^2$ and $0$ elsewhere.

[Figure: Two two-dimensional Gaussian KDEs, one with a small bandwidth (left) and one with a large bandwidth (right).]

Let us write the Gaussian KDE as
$$
g_{\tau_n}(x) = \frac{1}{n}\sum_{i=1}^n \frac{1}{\sigma^d}\,\varphi\!\left(\frac{x-x_i}{\sigma}\right),
$$
where
$$
\varphi(z) = \frac{e^{-\|z\|^2/2}}{(2\pi)^{d/2}},\quad z\in\mathbb{R}^d,
$$
is the pdf of the $d$-dimensional standard normal distribution. By choosing a different probability density $\varphi$, satisfying $\varphi(x) = \varphi(-x)$ for all $x$, we can obtain a wide variety of kernel density estimates. A simple pdf $\varphi$ is, for example, the uniform pdf on $[-1,1]^d$:
$$
\varphi(z) = \begin{cases} 2^{-d}, & z\in[-1,1]^d,\\ 0, & \text{otherwise.}\end{cases}
$$
A companion figure shows the graph of the corresponding KDE, using the same data and the same bandwidth. We observe qualitatively similar behavior for the Gaussian and uniform KDEs. As a rule, the choice of the function $\varphi$ is less important than the choice of the bandwidth $\sigma$ in determining the quality of the estimate.
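To make the formula above concrete, here is a minimal sketch (not the book's code) of a $d$-dimensional Gaussian KDE evaluated at a few points; the data, bandwidth values, and evaluation grid are illustrative assumptions.

```python
import numpy as np

def gauss_kde(x_eval, data, sigma):
    """Gaussian KDE: average of normal pdfs centered at the data points."""
    n, d = data.shape
    diff = x_eval[:, None, :] - data[None, :, :]       # pairwise differences
    sq = np.sum(diff**2, axis=2)                        # squared distances
    kernels = np.exp(-sq/(2*sigma**2))/((2*np.pi)**(d/2)*sigma**d)
    return kernels.mean(axis=1)                         # average over the data points

# illustrative data: 100 points drawn uniformly from the unit square
rng = np.random.default_rng(1)
data = rng.uniform(size=(100, 2))
grid = rng.uniform(size=(5, 2))                         # a few evaluation points
print(gauss_kde(grid, data, sigma=0.05))                # small bandwidth: spiky estimate
print(gauss_kde(grid, data, sigma=0.5))                 # large bandwidth: over-smoothed
```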
[Figure: A two-dimensional uniform KDE.]

The important issue of bandwidth selection has been studied extensively for one-dimensional data. To explain the ideas, we use our usual setup and let $\tau = \{x_1,\ldots,x_n\}$ be the observed (one-dimensional) data from the unknown pdf $f$. First, we define the loss function as
$$
\mathrm{Loss}(f(x), g(x)) = (f(x) - g(x))^2 .
$$
The risk to minimize is thus
$$
\ell(g) := \int \mathrm{Loss}(f(x), g(x))\,\mathrm{d}x = \int (f(x) - g(x))^2\,\mathrm{d}x .
$$
We bypass the selection of a class of approximating functions by choosing the learner to be the kernel density estimate above, for a fixed kernel $\varphi$; the objective is now to find a bandwidth $\sigma$ that minimizes the generalization risk $\ell(g_{\tau_n}(\cdot\,;\sigma))$, or the expected generalization risk $\mathbb{E}\,\ell(g_{\mathcal{T}_n}(\cdot\,;\sigma))$. The generalization risk is in this case
$$
\int (f(x) - g_{\tau_n}(x;\sigma))^2\,\mathrm{d}x
= \int f^2(x)\,\mathrm{d}x - 2\int f(x)\, g_{\tau_n}(x;\sigma)\,\mathrm{d}x + \int g_{\tau_n}^2(x;\sigma)\,\mathrm{d}x .
$$
Minimizing this expression with respect to $\sigma$ is equivalent to minimizing the last two terms,
$$
\int g_{\tau_n}^2(x;\sigma)\,\mathrm{d}x - 2\int f(x)\,g_{\tau_n}(x;\sigma)\,\mathrm{d}x ,
$$
which in turn can be estimated by using a test sample $\{x'_1,\ldots,x'_n\}$ from $f$, yielding the following minimization problem:
$$
\min_{\sigma}\;\frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n \frac{1}{\sigma^d}\,\overline{\varphi}\!\left(\frac{x_i - x_j}{\sigma}\right)
\;-\; \frac{2}{n}\sum_{i=1}^n g_{\tau_n}(x'_i;\sigma),
$$
where $\overline{\varphi}(z) = \int \varphi(z - x)\,\varphi(x)\,\mathrm{d}x$ is the convolution of $\varphi$ with itself; in the case of the Gaussian kernel this is again a Gaussian kernel, with $\sigma$ replaced by $\sigma\sqrt{2}$. To estimate $\sigma$ in this way clearly requires a test sample, or at least an application of cross-validation.

Another approach is to minimize the expected generalization risk (that is, averaged over all training sets),
$$
\mathbb{E} \int (f(x) - g_{\mathcal{T}_n}(x;\sigma))^2\,\mathrm{d}x .
$$
This is called the mean integrated squared error (MISE). It can be decomposed into an integrated squared bias and an integrated variance component:
$$
\int (f(x) - \mathbb{E}\, g_{\mathcal{T}_n}(x;\sigma))^2\,\mathrm{d}x + \int \mathbb{V}\mathrm{ar}\,(g_{\mathcal{T}_n}(x;\sigma))\,\mathrm{d}x .
$$
A typical analysis now proceeds by investigating how the MISE behaves for large $n$, under various assumptions on $f$. For example, it is shown in [ ] that, for $d = 1$ and in the asymptotic regime $\sigma \to 0$ with $n\sigma \to \infty$, the asymptotic approximation to the MISE of the Gaussian kernel density estimator is given by
$$
\frac{1}{2\,n\,\sigma\sqrt{\pi}} + \frac{1}{4}\,\sigma^4\, \|f''\|^2,
\qquad\text{where } \|f''\|^2 := \int (f''(x))^2\,\mathrm{d}x .
$$
The asymptotically optimal value of $\sigma$ is the minimizer
$$
\sigma^* = \left(\frac{1}{2\,n\,\sqrt{\pi}\,\|f''\|^2}\right)^{1/5}.
$$
To compute the optimal $\sigma^*$, one needs to estimate the functional $\|f''\|^2$. The Gaussian rule of thumb is to assume that $f$ is the density of the $\mathcal{N}(\bar{x}, s^2)$ distribution, where $\bar{x}$ and $s^2$ are the sample mean and variance of the data, respectively [ ]. In this case $\|f''\|^2 = \frac{3}{8\sqrt{\pi}\,s^5}$, and the Gaussian rule of thumb becomes
$$
\sigma_{\mathrm{ROT}} = \left(\frac{4\,s^5}{3\,n}\right)^{1/5} \approx 1.06\, s\, n^{-1/5}.
$$
We recommend, however, the fast and reliable theta KDE of [ ], which chooses the bandwidth in an optimal way via a fixed-point procedure. The figures above also illustrate a common problem with traditional KDEs: for distributions on a bounded domain, such as the uniform distribution on $[0,1]^2$, the KDE assigns positive probability mass outside this domain. An additional advantage of the theta KDE is that it largely avoids this boundary effect.
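As a quick illustration of the rule-of-thumb formula above, the following sketch computes the Gaussian rule-of-thumb bandwidth for a one-dimensional sample; the sample itself is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)                    # illustrative one-dimensional sample
n, s = len(x), x.std(ddof=1)                # sample size and sample standard deviation
sigma_rot = (4*s**5/(3*n))**(1/5)           # Gaussian rule of thumb
print(sigma_rot)                            # approximately 1.06 * s * n**(-1/5)
```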
We illustrate the theta KDE with the following example.

Example (Comparison of Gaussian and Theta KDEs). The following Python program draws an iid sample from the Exp(1) distribution and constructs a Gaussian kernel density estimate. We see in the figure below that with an appropriate choice of the bandwidth a good fit to the true pdf can be achieved, except at the boundary $x = 0$. The theta KDE does not exhibit this boundary effect; moreover, it chooses the bandwidth automatically, to achieve a superior fit. The theta KDE source code is available as kde.py on the book's GitHub site.

[Figure: Kernel density estimates (Gaussian KDE, theta KDE, and true density) for Exp(1)-distributed data.]

gausthetakde.py

```python
import matplotlib.pyplot as plt
import numpy as np
from kde import *                                 # theta KDE (kde.py from the book's GitHub site)

sig = 0.1; sig2 = sig**2                          # bandwidth (the original value was lost; illustrative)
c = 1/np.sqrt(2*np.pi)/sig                        # normalization constant
phi = lambda x, x0: np.exp(-(x - x0)**2/(2*sig2)) # unscaled Gaussian kernel
f = lambda x: np.exp(-x)*(x >= 0)                 # true pdf

n = 10**4                                         # sample size (illustrative)
x = -np.log(np.random.uniform(size=n))            # generate data via the inverse-transform method

xx = np.arange(-0.5, 6, 0.01, dtype="d")          # plot range (illustrative)
phis = np.zeros(len(xx))
for i in range(0, n):
    phis = phis + phi(xx, x[i])
phis = c*phis/n
plt.plot(xx, phis, 'r')                           # plot the Gaussian KDE

[bandwidth, density, xmesh, cdf] = kde(x, 2**12, 0, max(x))
idx = (xmesh <= 6)
plt.plot(xmesh[idx], density[idx])                # plot the theta KDE
plt.plot(xx, f(xx))                               # plot the true pdf
plt.show()
```

Clustering via Mixture Models

Clustering is concerned with the grouping of unlabeled feature vectors into clusters, such that samples within a cluster are more similar to each other than samples belonging to different clusters. Usually, it is assumed that the number of clusters is known in advance, but otherwise no prior information is given about the data. Applications of clustering can be found in the areas of communication, data compression and storage, database searching, pattern matching, and object recognition.

A common approach to clustering analysis is to assume that the data come from a mixture of (usually Gaussian) distributions, so that the objective is to estimate the parameters of the mixture model by maximizing the likelihood function for the data. Direct optimization of the likelihood function is in this case not a simple task, due to necessary constraints on the parameters (more about this later) and the complicated nature of the likelihood function, which in general has a great number of local maxima and saddle points. A popular method to estimate the parameters of a mixture model is the EM algorithm, which was discussed in a more general setting earlier in the book. In this section we explain the basics of mixture modeling and the workings of the EM method in this context. In addition, we show how direct optimization methods can be used to maximize the likelihood.

Mixture Models

Let $X_1,\ldots,X_n$ be iid random vectors taking values in some set $\mathcal{X} \subseteq \mathbb{R}^d$, each $X_i$ being distributed according to the mixture density
$$
g(x \mid \theta) = w_1\,\varphi_1(x) + \cdots + w_K\,\varphi_K(x),
$$
where $\varphi_1,\ldots,\varphi_K$ are probability densities (discrete or continuous) on $\mathcal{X}$, and the positive weights $w_1,\ldots,w_K$ sum up to 1.

This mixture pdf can be interpreted in the following way. Let $Z$ be a discrete random variable taking values $1,\ldots,K$ with probabilities $w_1,\ldots,w_K$, and let $X$ be a random vector whose conditional pdf, given $Z = z$, is $\varphi_z$. By the product rule, the joint pdf of $Z$ and $X$ is given by
$$
\varphi_{Z,X}(z, x) = \varphi_Z(z)\,\varphi_{X\mid Z}(x\mid z) = w_z\,\varphi_z(x),
$$
and the marginal pdf of $X$ is found by summing the joint pdf over the values of $z$, which gives the mixture density above. A random vector $X \sim g$ can thus be simulated in two steps: first, draw $Z$ according to the probabilities $\mathbb{P}[Z = z] = w_z$, $z = 1,\ldots,K$; then draw $X$ according to the pdf $\varphi_Z$. As $\tau$ only contains the $\{x_i\}$ variables, the $\{z_i\}$ are viewed as latent variables. We can interpret $z_i$ as the hidden label of the cluster to which $x_i$ belongs.

Typically, each $\varphi_k$ is assumed to be known up to some parameter vector $\eta_k$. It is customary in clustering analysis to work with Gaussian mixtures; that is, each density $\varphi_k$ is Gaussian with some unknown expectation vector $\mu_k$ and covariance matrix $\Sigma_k$. (Other common mixture distributions include Student t and beta distributions.) We gather all unknown parameters, including the weights $\{w_k\}$, into a parameter vector $\theta$. As usual, $\tau = \{x_1,\ldots,x_n\}$ denotes the outcome of $\mathcal{T} = \{X_1,\ldots,X_n\}$. As the components of $\mathcal{T}$ are iid, their (joint) pdf is given by
$$
g(\tau \mid \theta) := \prod_{i=1}^n g(x_i \mid \theta) = \prod_{i=1}^n \sum_{k=1}^K w_k\,\varphi_k(x_i \mid \mu_k, \Sigma_k).
$$
Following the usual maximum likelihood reasoning, we can estimate $\theta$ from an outcome $\tau$ by maximizing the log-likelihood function
$$
l(\theta \mid \tau) := \ln g(\tau\mid\theta) = \sum_{i=1}^n \ln \sum_{k=1}^K w_k\,\varphi_k(x_i \mid \mu_k, \Sigma_k).
$$
However, finding the maximizer of $l(\theta\mid\tau)$ is not easy in general, since the function is typically multiextremal.
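The two-step simulation procedure described above (draw the label $Z$ from the weights, then draw $X$ from $\varphi_Z$) can be sketched as follows; the specific weights and Gaussian components are illustrative assumptions, not those of the book's examples.

```python
import numpy as np

rng = np.random.default_rng(2)
w = np.array([0.3, 0.5, 0.2])                        # mixture weights (sum to 1)
mus = [np.array([0.0, 0.0]), np.array([3.0, 0.0]), np.array([0.0, 3.0])]
Sigmas = [np.eye(2), 0.5*np.eye(2), 2.0*np.eye(2)]

def draw_from_mixture(n):
    z = rng.choice(len(w), size=n, p=w)              # step 1: draw the hidden labels Z
    x = np.array([rng.multivariate_normal(mus[k], Sigmas[k]) for k in z])
    return x, z                                      # step 2: draw X from phi_Z

x, z = draw_from_mixture(5)
print(z)   # hidden cluster labels
print(x)   # the corresponding mixture sample
```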
Example (Clustering via Mixture Models). The data depicted in the figure below were independently generated from three bivariate normal distributions, whose parameters are given in that same figure. Exactly the same number of points was generated from each of the three distributions. Ideally, we would like to cluster the data into three clusters that correspond to the three cases.

To cluster the data into three groups, a possible model is to assume that the points are iid draws from an (unknown) mixture of three 2-dimensional Gaussian distributions. This is a sensible approach, although in reality the data were not simulated in this way. It is instructive to understand the difference between the two models. In the mixture model, each cluster label takes the value 1, 2, or 3 with equal probability, and hence, drawing the labels independently, the total number of points in each cluster would be binomially distributed with success probability 1/3. In the actual simulation, however, the number of points in each cluster is exactly one third of the total. Nevertheless, the mixture model would be an accurate (although not exact) model for these data.

[Figure: Cluster the data points (left) into three clusters, without making any assumptions about the probability distribution of the data. In fact, the data were generated from three bivariate normal distributions, whose mean vectors and covariance matrices are listed on the right.]

The next figure displays the "target" Gaussian mixture density for these data; that is, the mixture with equal weights and with the exact parameters that were used to generate the data.

[Figure: The target mixture density for the data.]

In the next section we will carry out the clustering by using the EM algorithm.

EM Algorithm for Mixture Models

As we saw earlier, instead of maximizing the log-likelihood function directly from the data $\tau = \{x_1,\ldots,x_n\}$, the EM algorithm first augments the data with a vector of latent variables — in this case the hidden cluster labels $z = \{z_1,\ldots,z_n\}$ (data augmentation). The idea is that $\tau$ is
only the observed part of the complete random data $(\mathcal{T}, Z)$, which were generated via the two-step procedure described above: for each data point $x$, first draw the cluster label $z \in \{1,\ldots,K\}$ according to the probabilities $\{w_k\}$, and then, given $z$, draw $x$ from $\varphi_z$. The joint pdf of $\mathcal{T}$ and $Z$ is
$$
g(\tau, z \mid \theta) = \prod_{i=1}^n w_{z_i}\,\varphi_{z_i}(x_i),
$$
which is of a much simpler form than the mixture likelihood. It follows that the complete-data log-likelihood function
$$
\widetilde{l}(\theta \mid \tau, z) = \sum_{i=1}^n \ln\!\left[w_{z_i}\,\varphi_{z_i}(x_i)\right]
$$
is often easier to maximize than the original log-likelihood, for any given $(\tau, z)$. But of course the latent variables $Z$ are not observed, and so $\widetilde{l}(\theta\mid\tau, Z)$ cannot be evaluated. In the E-step of the EM algorithm, the complete-data log-likelihood is replaced with the expectation $\mathbb{E}_g\,\widetilde{l}(\theta\mid\tau, Z)$, where the subscript $g$ indicates that $Z$ is distributed according to the conditional pdf of $Z$ given $\mathcal{T} = \tau$; that is, with pdf
$$
g(z \mid \tau, \theta) = \frac{g(\tau, z\mid\theta)}{g(\tau\mid\theta)}.
$$
Note that $g(z\mid\tau,\theta)$ is of the product form $p_1(z_1)\cdots p_n(z_n)$, so that, given $\tau$, the components of $Z$ are independent of each other. The EM algorithm for mixture models can now be formulated as follows.

Algorithm (EM Algorithm for Mixture Models).
Input: data $\tau$, initial guess $\theta^{(0)}$.
Output: approximation of the maximum likelihood estimate.
1. $t \leftarrow 1$.
2. while a stopping criterion is not met do:
3.   Expectation step: find $g^{(t)}(z) := g(z \mid \tau, \theta^{(t-1)})$ and $Q^{(t)}(\theta) := \mathbb{E}_{g^{(t)}}\,\widetilde{l}(\theta \mid \tau, Z)$.
4.   Maximization step: let $\theta^{(t)} \leftarrow \mathrm{argmax}_{\theta}\, Q^{(t)}(\theta)$.
5.   $t \leftarrow t + 1$.
6. return $\theta^{(t)}$.

A possible termination condition is to stop when
$$
\frac{\left|l(\theta^{(t)}\mid\tau) - l(\theta^{(t-1)}\mid\tau)\right|}{\left|l(\theta^{(t)}\mid\tau)\right|} < \varepsilon
$$
for some small tolerance $\varepsilon > 0$. As was mentioned earlier, the sequence of log-likelihood values does not decrease with each iteration. Under certain continuity conditions, the sequence $\{\theta^{(t)}\}$ is guaranteed to converge to a local maximizer of the log-likelihood; convergence to a global maximizer (if it exists) depends on the appropriate choice of the starting value. Typically, the algorithm is run from different random starting points.

For the case of Gaussian mixtures, each $\varphi_k = \varphi(\cdot \mid \mu_k, \Sigma_k)$ is the density of a $d$-dimensional Gaussian distribution. Let $\theta^{(t-1)}$ be the current guess for the optimal parameter vector, consisting of the weights $\{w_k^{(t-1)}\}$, mean vectors $\{\mu_k^{(t-1)}\}$, and covariance matrices $\{\Sigma_k^{(t-1)}\}$. We first determine $g^{(t)}$ — the pdf of $Z$ conditional on $\tau$ — for the given guess $\theta^{(t-1)}$. As mentioned before, the components of $Z$ given $\tau$ are independent,
so it suffices to specify the discrete pdf, $p_i^{(t)}$ say, of each $Z_i$ given the observed point $x_i$. The latter can be found from Bayes' formula:
$$
p_i^{(t)}(k) \propto w_k^{(t-1)}\,\varphi_k\!\left(x_i \mid \mu_k^{(t-1)}, \Sigma_k^{(t-1)}\right),\quad k = 1,\ldots,K.
$$
Next, the function $Q^{(t)}(\theta)$ can be written as
$$
Q^{(t)}(\theta) = \mathbb{E}_{g^{(t)}} \sum_{i=1}^n \left[\ln w_{Z_i} + \ln \varphi_{Z_i}(x_i \mid \mu_{Z_i}, \Sigma_{Z_i})\right]
= \sum_{i=1}^n \mathbb{E}_{g^{(t)}} \left[\ln w_{Z_i} + \ln \varphi_{Z_i}(x_i \mid \mu_{Z_i}, \Sigma_{Z_i})\right],
$$
where the $\{Z_i\}$ are independent and $Z_i$ is distributed according to $p_i^{(t)}$. This completes the E-step.

In the M-step we maximize $Q^{(t)}(\theta)$ with respect to the parameter $\theta$; that is, with respect to the $\{w_k\}$, $\{\mu_k\}$, and $\{\Sigma_k\}$. In particular, we maximize
$$
\sum_{i=1}^n \sum_{k=1}^K p_i^{(t)}(k)\left[\ln w_k + \ln \varphi_k(x_i \mid \mu_k, \Sigma_k)\right]
$$
under the condition $\sum_k w_k = 1$. Using Lagrange multipliers and the fact that $\sum_k p_i^{(t)}(k) = 1$ gives the solution for the $\{w_k\}$:
$$
w_k^{(t)} = \frac{1}{n}\sum_{i=1}^n p_i^{(t)}(k).
$$
The solutions for $\mu_k$ and $\Sigma_k$ now follow from maximizing $\sum_{i=1}^n p_i^{(t)}(k)\,\ln \varphi_k(x_i\mid\mu_k,\Sigma_k)$, leading to
$$
\mu_k^{(t)} = \frac{\sum_{i=1}^n p_i^{(t)}(k)\, x_i}{\sum_{i=1}^n p_i^{(t)}(k)}
\qquad\text{and}\qquad
\Sigma_k^{(t)} = \frac{\sum_{i=1}^n p_i^{(t)}(k)\,(x_i - \mu_k^{(t)})(x_i - \mu_k^{(t)})^\top}{\sum_{i=1}^n p_i^{(t)}(k)},
$$
which are very similar to the well-known formulas for the MLEs of the parameters of a Gaussian distribution. After assigning the solution parameters to $\theta^{(t)}$ and increasing the iteration counter by one, the E- and M-steps are repeated until convergence is reached.

Convergence of the EM algorithm is very sensitive to the choice of initial parameters. It is therefore recommended to try various different starting conditions. For a further discussion of the theoretical and practical aspects of the EM algorithm we refer to [ ].

Example (Clustering via EM). We return to the data of the previous example and adopt the model that the data come from a mixture of three bivariate Gaussian distributions. The Python code below implements the EM procedure described in the algorithm above. The initial mean vectors $\{\mu_k\}$ of the bivariate Gaussian distributions are chosen (from visual inspection) to lie roughly in the middle of each cluster. The corresponding covariance matrices are initially chosen as identity matrices, which is appropriate given the observed spread of the data. Finally, the initial weights are $1/3, 1/3, 1/3$. For simplicity, the algorithm stops after a fixed number of iterations, which in this case is more than enough to guarantee convergence. The code and data are available from the book's GitHub site.
emclust.py

```python
import numpy as np
from scipy.stats import multivariate_normal

Xmat = np.genfromtxt('clusterdata.csv', delimiter=',')
K = 3
n, D = Xmat.shape

W = np.array([[1/3, 1/3, 1/3]])                  # initial weights
# Initial means (as columns), chosen by visual inspection; the exact values in the
# original listing were lost, so these are illustrative. Note that if *all* entries
# were written as integers, M would be of integer type, which would give the wrong answer.
M = np.array([[-2.0, -4, 0], [-3, 1, -1]], dtype=np.float64)
C = np.zeros((K, 2, 2))
C[:, 0, 0] = 1
C[:, 1, 1] = 1                                   # initial covariances: identity matrices
p = np.zeros((K, n))

for t in range(0, 100):                          # fixed number of iterations
    # E-step
    for k in range(0, K):
        mvn = multivariate_normal(M[:, k].T, C[k, :, :])
        p[k, :] = W[0, k]*mvn.pdf(Xmat)
    # M-step
    p = p/sum(p, 0)                              # normalize the responsibilities
    W = np.mean(p, 1).reshape(1, K)
    for k in range(0, K):
        M[:, k] = (Xmat.T @ p[k, :].T)/sum(p[k, :])
        xm = Xmat.T - M[:, k].reshape(2, 1)
        C[k, :, :] = xm @ (xm*p[k, :]).T/sum(p[k, :])
```

The estimated parameters of the mixture distribution are shown alongside the figure below. After a relabeling of the clusters, we observe a close match with the parameters that were used to generate the data. The ellipses in the figure show a close match between the probability ellipses of the original Gaussian distributions (in gray) and the estimated ones. A natural way to cluster each point $x_i$ is to assign it to the cluster $k$ for which the conditional probability $p_i(k)$ is maximal (with ties resolved arbitrarily). This gives the clustering of the points into red, green, and blue clusters in the figure. For each mixture component, the contour of the corresponding bivariate normal pdf is shown that encloses a fixed proportion of the probability mass.
[Figure: The results of the EM clustering algorithm applied to the data: the clustered points with probability ellipses (left) and the estimated weights, mean vectors, and covariance matrices of the fitted mixture (right).]

As an alternative to the EM algorithm, one can of course use continuous multiextremal optimization algorithms to directly optimize the log-likelihood function $l(\theta\mid\tau) = \ln g(\tau\mid\theta)$ over the set $\Theta$ of all possible $\theta$. This is done for example in [ ], demonstrating superior results to EM when there are few data points.

A closer investigation of the likelihood function reveals that there is a hidden problem with any maximum likelihood approach for clustering if $\Theta$ is chosen as large as possible — that is, if any mixture distribution is allowed. To demonstrate this problem, consider the figure below, depicting the probability density function $g(\cdot\mid\theta)$ of a mixture of two Gaussian distributions, where $\theta$ is the vector of parameters (weights, means, and variances) of the mixture distribution. The log-likelihood function is given by $l(\theta\mid\tau) = \sum_{i=1}^n \ln g(x_i\mid\theta)$, where $x_1,\ldots,x_n$ are the data (indicated by dots in the figure).

[Figure: A mixture of two Gaussian distributions.]

It is clear that by fixing the mixing constant $w$ and centering the first cluster at $x_1$, one can obtain an arbitrarily large likelihood value by taking the variance of the first cluster to be arbitrarily small. Similarly, for higher-dimensional data, by choosing "point" or "line" clusters, or in general "degenerate" clusters, one can make the value of the likelihood infinite. This is a manifestation of the familiar overfitting problem for the training loss that we encountered earlier.
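The unbounded-likelihood phenomenon is easy to verify numerically. The sketch below fixes the weight, centers the first component at the first data point, and lets its standard deviation shrink; the data and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
x = rng.normal(size=20)              # illustrative one-dimensional data
w, mu2, sig2 = 0.5, 0.0, 1.0         # fix the weight and the second component

def loglik(sig1):
    # two-component Gaussian mixture, first component centered at x[0]
    g = w*norm.pdf(x, x[0], sig1) + (1 - w)*norm.pdf(x, mu2, sig2)
    return np.sum(np.log(g))

for sig1 in [1.0, 0.1, 0.01, 0.001]:
    print(sig1, loglik(sig1))        # the log-likelihood grows without bound as sig1 -> 0
```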
Thus, the unconstrained maximization of the log-likelihood function is an ill-posed problem, irrespective of the choice of the optimization algorithm. Two possible solutions to this "overfitting" problem are:
1. Restrict the parameter set $\Theta$ in such a way that degenerate clusters (sometimes called spurious clusters) are not allowed.
2. Run the given algorithm and, if the solution is degenerate, discard it and run the algorithm afresh. Keep restarting the algorithm until a non-degenerate solution is obtained.
The first approach is usually applied to multiextremal optimization algorithms and the second is used for the EM algorithm.

Clustering via Vector Quantization

In the previous section we introduced clustering via mixture models, as a form of parametric density estimation (as opposed to the nonparametric density estimation discussed earlier). The clusters were modeled in a natural way via the latent variables, and the EM algorithm provided a convenient way to assign the cluster members. In this section we consider a more heuristic approach to clustering that ignores the distributional properties of the data. The resulting algorithms tend to scale better with the number of samples and the dimensionality.

In mathematical terms, we consider the following clustering (also called data segmentation) problem: given a collection $\tau = \{x_1,\ldots,x_n\}$ of data points in some $d$-dimensional space $\mathcal{X}$, divide this data set into $K$ clusters (groups) such that some loss function is minimized.

A convenient way to determine these clusters is to first divide up the entire space $\mathcal{X}$, using some distance function $\mathrm{dist}(\cdot,\cdot)$ on this space. A standard choice is the Euclidean (or $L_2$) distance
$$
\mathrm{dist}(x, x') = \|x - x'\| = \sqrt{\sum_{j=1}^d (x_j - x'_j)^2}.
$$
Other commonly used distance measures on $\mathbb{R}^d$ include the Manhattan distance
$$
\sum_{j=1}^d |x_j - x'_j|
$$
and the maximum distance
$$
\max_{j=1,\ldots,d} |x_j - x'_j|.
$$
On the set of strings of length $d$, an often-used distance measure is the Hamming distance
$$
\sum_{j=1}^d 1\{x_j \neq x'_j\},
$$
that is, the number of mismatched characters. For example, the Hamming distance between the strings 0110 and 0101 is 2.
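The distance measures listed above can be sketched in a few lines; the example vectors and strings are assumptions for illustration only.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 0.0, 3.5])
print(np.sqrt(np.sum((x - y)**2)))        # Euclidean distance
print(np.sum(np.abs(x - y)))              # Manhattan distance
print(np.max(np.abs(x - y)))              # maximum distance

s, t = "0110", "0101"
print(sum(a != b for a, b in zip(s, t)))  # Hamming distance (= 2)
```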
We can partition the space $\mathcal{X}$ into $K$ regions as follows. First, we choose $K$ points $c_1,\ldots,c_K \in \mathcal{X}$, called cluster centers or source vectors. For each $k$, let
$$
\mathcal{R}_k = \{x \in \mathcal{X} : \mathrm{dist}(x, c_k) \leq \mathrm{dist}(x, c_i)\ \text{for all } i \neq k\}
$$
be the set of points in $\mathcal{X}$ that lie closer to $c_k$ than to any other center. The regions or cells $\{\mathcal{R}_k\}$ divide the space $\mathcal{X}$ into what is called a Voronoi diagram or Voronoi tessellation. The figure below shows a Voronoi tessellation of the plane into ten regions, using the Euclidean distance. Note that here the boundaries between the Voronoi cells are straight line segments. In particular, if cells $\mathcal{R}_i$ and $\mathcal{R}_j$ share a border, then a point $x$ on this border must satisfy $\|x - c_i\| = \|x - c_j\|$; that is, it must lie on the line that passes through the point $(c_i + c_j)/2$ (the midway point of the line segment between $c_i$ and $c_j$) and is perpendicular to $c_j - c_i$.

[Figure: A Voronoi tessellation of the plane into ten cells, determined by the (red) centers.]

Once the centers (and thus the cells $\{\mathcal{R}_k\}$) are chosen, the points in $\tau$ can be clustered according to their nearest center. Points on the boundary have to be treated separately, but this is a moot point for continuous data, as generally no data points will lie exactly on a boundary.

The main remaining issue is how to choose the centers so as to cluster the data in some optimal way. In terms of our (unsupervised) learning framework, we wish to approximate a vector $x$ via one of $c_1,\ldots,c_K$, using a piecewise constant vector-valued function
$$
g(x \mid C) := \sum_{k=1}^K c_k\, 1\{x \in \mathcal{R}_k\},
$$
where $C$ is the matrix $[c_1,\ldots,c_K]$. Thus, $g(x\mid C) = c_k$ when $x$ falls in region $\mathcal{R}_k$ (we ignore ties). Within this class of functions, parameterized by $C$, our aim is to minimize the training loss. In particular, for the squared-error loss, $\mathrm{Loss}(x, x') = \|x - x'\|^2$, the training loss is
$$
\ell_\tau(g(\cdot\mid C)) = \frac{1}{n}\sum_{i=1}^n \|x_i - g(x_i\mid C)\|^2 = \frac{1}{n}\sum_{k=1}^K \sum_{x_i \in \mathcal{R}_k} \|x_i - c_k\|^2 .
$$
Thus, minimizing the training loss amounts to minimizing the average squared distance between each point and its nearest center. This framework also combines the encoding and decoding steps in vector quantization [ ].
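The training loss of the piecewise-constant quantizer $g(x\mid C)$ — the average squared distance of each point to its nearest center — can be sketched as follows; the data and candidate centers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(size=(200, 2))                 # illustrative data points
C = np.array([[0.2, 0.2], [0.8, 0.8]])         # two candidate centers

dist2 = ((X[:, None, :] - C[None, :, :])**2).sum(axis=2)  # squared distances to each center
loss = dist2.min(axis=1).mean()                # average squared distance to the nearest center
print(loss)
```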
Namely, we wish to "quantize" or "encode" the vectors in $\tau$ in such a way that each vector is represented by one of $K$ source vectors $c_1,\ldots,c_K$, and such that the loss of this representation is minimized.

Most well-known clustering and vector quantization methods update the vector of centers, starting from some initial choice, using iterative (typically gradient-based) procedures. It is important to realize that in this case the training loss is seen as a function of the centers, where each point is assigned to the nearest center, thus determining the clusters. It is well known that this type of problem — optimization with respect to the centers — is highly multiextremal and, depending on the initial clusters, gradient-based procedures tend to converge to a local minimum rather than a global minimum.

K-Means

One of the simplest methods for clustering is the K-means method. It is an iterative method where, starting from an initial guess for the centers, new centers are formed by taking the sample means of the current points in each cluster. The new centers are thus the centroids of the points in each cell. Although there exist many different varieties of the K-means algorithm, they are all essentially of the following form.

Algorithm (K-Means).
Input: collection of points $\tau = \{x_1,\ldots,x_n\}$, number of clusters $K$, initial centers $c_1,\ldots,c_K$.
Output: cluster centers and cells (regions).
1. while a stopping criterion is not met do:
2.   $\mathcal{R}_k \leftarrow \emptyset$ for $k = 1,\ldots,K$.
3.   for $i = 1$ to $n$ do:
4.     $d \leftarrow [\mathrm{dist}(x_i, c_1),\ldots,\mathrm{dist}(x_i, c_K)]$  (distances to the centers)
5.     $k \leftarrow \mathrm{argmin}_j\, d_j$
6.     $\mathcal{R}_k \leftarrow \mathcal{R}_k \cup \{x_i\}$  (assign $x_i$ to cluster $k$)
7.   for $k = 1$ to $K$ do:
8.     $c_k \leftarrow \frac{1}{|\mathcal{R}_k|}\sum_{x \in \mathcal{R}_k} x$  (compute the new center as the centroid of the points)
9. return $\{c_k\}$, $\{\mathcal{R}_k\}$.

Thus, at each iteration, for a given choice of centers, each point in $\tau$ is assigned to its nearest center. After all points have been assigned, the centers are recomputed as the centroids of all the points in the current cluster. A typical stopping criterion is to stop when the centers no longer change very much. As the algorithm is quite sensitive to the choice of the initial centers, it is prudent to try multiple starting values, e.g., chosen randomly from the bounding box of the data points.

We can see the K-means method as a deterministic (or "hard") version of the probabilistic (or "soft") EM algorithm, as follows. Suppose that in the EM algorithm we have a Gaussian mixture with fixed covariance matrices $\Sigma_k = \sigma^2 I_d$, $k = 1,\ldots,K$, where $\sigma^2$ should be thought of as being infinitesimally small. Consider iteration $t$ of the EM algorithm. Having obtained the expectation vectors $\mu_k^{(t-1)}$ and weights $w_k^{(t-1)}$, $k = 1,\ldots,K$, each point $x_i$ is assigned a cluster label $z_i$ according to the probabilities $p_i^{(t)}(k)$ given in the E-step above.
But for $\sigma^2 \to 0$ the probability distribution $\{p_i^{(t)}(k),\, k = 1,\ldots,K\}$ becomes degenerate, putting all its probability mass on $\mathrm{argmin}_k \|x_i - \mu_k\|$. This corresponds to the K-means rule of assigning $x_i$ to its nearest cluster center. Moreover, in the M-step each cluster center $\mu_k^{(t)}$ is now updated according to the average of the $\{x_i\}$ that have been assigned to cluster $k$. We thus obtain the same deterministic updating rule as in K-means.

Example (K-Means Clustering). We cluster the data of the earlier example via K-means, using the Python implementation below. Note that the data points are stored as the rows of a matrix Xmat. We take the same starting centers as in the EM example. Note also that squared Euclidean distances are used in the computations, as these are slightly faster to compute than Euclidean distances (no square root computations are required), while yielding exactly the same cluster center evaluations.

kmeans.py

```python
import numpy as np

Xmat = np.genfromtxt('clusterdata.csv', delimiter=',')
K = 3
n, D = Xmat.shape
# Initial centers (as columns), as in the EM example; the exact values in the
# original listing were lost, so these are illustrative.
c = np.array([[-2.0, -4, 0], [-3, 1, -1]])
cold = np.zeros(c.shape)
dist2 = np.zeros((K, n))
while np.abs(c - cold).sum() > 0.001:         # stop when the centers barely change
    cold = c.copy()
    for i in range(0, K):                     # compute the squared distances
        dist2[i, :] = np.sum((Xmat - c[:, i].T)**2, 1)
    label = np.argmin(dist2, 0)               # assign each point to the nearest centroid
    minvals = np.amin(dist2, 0)
    for i in range(0, K):                     # recompute the centroids
        c[:, i] = Xmat[label == i].mean(axis=0)

print('Loss = {:3.3f}'.format(minvals.mean()))
```

[Figure: Results of the K-means algorithm applied to the data. The thick black circles are the centroids and the dotted lines define the cell boundaries.]
The cluster centers found by the algorithm give the clustering depicted in the figure above; the corresponding value of the training loss is printed by the program.

Clustering via Continuous Multiextremal Optimization

As already mentioned, the exact minimization of the loss function is difficult to accomplish via standard local search methods such as gradient descent, as the function is highly multimodal. However, nothing prevents us from using global optimization methods, such as the CE or SCO methods discussed earlier in the book.

Example (Clustering via CE). We take the same data set as in the previous example and cluster the points via minimization of the training loss using the CE method. The Python code below is very similar to the earlier CE code, except that now we are dealing with a six-dimensional optimization problem. The loss function is implemented in the function Scluster, which essentially reuses the squared-distance computation of the K-means code. The CE program typically converges to the (global) minimizers of the loss, which differ slightly from the local minimizers found by the K-means algorithm.

clustce.py

```python
import numpy as np
np.set_printoptions(precision=3)

Xmat = np.genfromtxt('clusterdata.csv', delimiter=',')
n, D = Xmat.shape
K = 3

def Scluster(c):
    """Training loss: average squared distance to the nearest of the K centers."""
    dist2 = np.zeros((K, n))
    cc = c.reshape(D, K)
    for i in range(0, K):
        dist2[i, :] = np.sum((Xmat - cc[:, i].T)**2, 1)
    minvals = np.amin(dist2, 0)
    return minvals.mean()

numvar = K*D
mu = np.zeros(numvar)            # initial means of the sampling distribution
sigma = np.ones(numvar)*2        # initial standard deviations (illustrative value)
N = 100; rho = 0.1               # CE sample size and rarity parameter (illustrative values)
Nel = int(N*rho); eps = 0.001    # elite sample size and tolerance (illustrative values)
func = Scluster

best_trj = np.zeros(numvar)
best_perf = np.inf
trj = np.zeros(shape=(N, numvar))
while np.max(sigma) > eps:
    for i in range(0, numvar):
        trj[:, i] = (np.random.randn(N, 1)*sigma[i] + mu[i]).reshape(N,)
    S = np.zeros(N)
    for i in range(0, N):
        S[i] = func(trj[i])
    sortedids = np.argsort(S)                 # from smallest to largest
    S_sorted = S[sortedids]
    eliteids = sortedids[range(0, Nel)]
    elitetrj = trj[eliteids, :]
    mu = np.mean(elitetrj, axis=0)
    sigma = np.std(elitetrj, axis=0)
    if best_perf > S_sorted[0]:
        best_perf = S_sorted[0]
        best_trj = trj[sortedids[0]]

print(best_perf)
print(best_trj.reshape(D, K))
```

Hierarchical Clustering

It is sometimes useful to determine data clusters in a hierarchical manner; an example is the construction of evolutionary relationships between animal species. Establishing a hierarchy of clusters can be done in a bottom-up or a top-down manner. In the bottom-up approach, also called agglomerative clustering, the data points are merged into larger and larger clusters until all the points have been merged into a single cluster. In the top-down or divisive clustering approach, the data set is divided up into smaller and smaller clusters.

The left panel of the figure below depicts a hierarchy of clusters. Each cluster is given a cluster identifier. At the lowest level are the clusters consisting of the original data points; the union of two such clusters forms a cluster at the next level, which in turn can be merged with other clusters, and so on, until all points are collected in a single cluster. The right panel of the figure shows a convenient way to visualize cluster hierarchies, using a dendrogram (from the Greek dendron, tree). A dendrogram not only summarizes how clusters are merged or split, but also shows the distance between clusters, here on the vertical axis. The horizontal axis shows which cluster each data point (label) belongs to.

Many different types of hierarchical clustering can be performed, depending on how the distance is defined between two data points and between two clusters. Denote the data set by $\mathcal{X} = \{x_i,\ i = 1,\ldots,n\}$ and, as before, let $\mathrm{dist}(x_i, x_j)$ be the distance between data points $x_i$ and $x_j$. The default choice is the Euclidean distance $\mathrm{dist}(x_i, x_j) = \|x_i - x_j\|$. Let $\mathcal{I}$ and $\mathcal{J}$ be two disjoint subsets of $\{1,\ldots,n\}$. These sets correspond to two disjoint subsets (that is, clusters) of $\mathcal{X}$: $\{x_i,\ i\in\mathcal{I}\}$ and $\{x_j,\ j\in\mathcal{J}\}$. We denote the distance between
[Figure: Left, a cluster hierarchy. Right, the corresponding dendrogram, with the distance between clusters on the vertical axis and the data labels on the horizontal axis.]

these two clusters by $d(\mathcal{I}, \mathcal{J})$. By specifying the function $d$ we indicate how the clusters are linked; for this reason it is also referred to as the linkage criterion. We give a number of examples.

Single linkage: the closest distance between the clusters,
$$
d_{\min}(\mathcal{I},\mathcal{J}) := \min_{i\in\mathcal{I},\, j\in\mathcal{J}} \mathrm{dist}(x_i, x_j).
$$

Complete linkage: the furthest distance between the clusters,
$$
d_{\max}(\mathcal{I},\mathcal{J}) := \max_{i\in\mathcal{I},\, j\in\mathcal{J}} \mathrm{dist}(x_i, x_j).
$$

Group average: the mean distance between the clusters (note that this depends on the cluster sizes),
$$
d_{\mathrm{avg}}(\mathcal{I},\mathcal{J}) := \frac{1}{|\mathcal{I}|\,|\mathcal{J}|} \sum_{i\in\mathcal{I}} \sum_{j\in\mathcal{J}} \mathrm{dist}(x_i, x_j).
$$

Ward's linkage: for the above linkage criteria, $\mathcal{X}$ is usually assumed to be $\mathbb{R}^d$ with the Euclidean distance. Another notable measure for the distance between clusters is Ward's minimum variance linkage criterion. Here, the distance between two clusters is expressed as the additional amount of "variance" (expressed in terms of sums of squares) that would be introduced if the two clusters were merged. More precisely, for any set $\mathcal{K}$ of indices (labels), let $\bar{x}_{\mathcal{K}} = \sum_{k\in\mathcal{K}} x_k/|\mathcal{K}|$ denote its corresponding cluster mean. Then
$$
d_{\mathrm{Ward}}(\mathcal{I},\mathcal{J}) := \sum_{k\in\mathcal{I}\cup\mathcal{J}} \|x_k - \bar{x}_{\mathcal{I}\cup\mathcal{J}}\|^2 - \sum_{i\in\mathcal{I}} \|x_i - \bar{x}_{\mathcal{I}}\|^2 - \sum_{j\in\mathcal{J}} \|x_j - \bar{x}_{\mathcal{J}}\|^2 .
$$
It can be shown (see the exercises) that the Ward linkage depends only on the cluster means and the cluster sizes:
$$
d_{\mathrm{Ward}}(\mathcal{I},\mathcal{J}) = \frac{|\mathcal{I}|\,|\mathcal{J}|}{|\mathcal{I}| + |\mathcal{J}|}\,\|\bar{x}_{\mathcal{I}} - \bar{x}_{\mathcal{J}}\|^2 .
$$
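The four linkage criteria above can be computed directly from the pairwise distances. The sketch below uses two small illustrative clusters; everything in it is assumed for illustration only.

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(5)
XI = rng.normal(size=(5, 2))            # cluster I (illustrative)
XJ = rng.normal(size=(4, 2)) + 3.0      # cluster J (illustrative)
D = cdist(XI, XJ)                       # pairwise Euclidean distances between the clusters

d_single = D.min()                      # single linkage
d_complete = D.max()                    # complete linkage
d_average = D.mean()                    # group average linkage
nI, nJ = len(XI), len(XJ)
d_ward = nI*nJ/(nI + nJ)*np.sum((XI.mean(0) - XJ.mean(0))**2)  # Ward linkage
print(d_single, d_complete, d_average, d_ward)
```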
In software implementations, the Ward linkage function is often rescaled by multiplying it by a factor of 2. In this way, the distance between two one-point clusters $\{x_i\}$ and $\{x_j\}$ is the squared Euclidean distance $\|x_i - x_j\|^2$.

Having chosen a distance on $\mathcal{X}$ and a linkage criterion, a general agglomerative clustering algorithm proceeds in the following "greedy" manner.

Algorithm (Greedy Agglomerative Clustering).
Input: distance function dist, linkage function $d$, number of clusters $K$.
Output: the label sets for the tree.
1. Initialize the set of active cluster identifiers: $I \leftarrow \{1,\ldots,n\}$.
2. Initialize the corresponding label sets: $\mathcal{L}_i \leftarrow \{i\}$, $i = 1,\ldots,n$.
3. Initialize the distance matrix $D = [d_{ij}]$ with $d_{ij} = d(\{i\},\{j\})$.
4. for $k = n+1$ to $2n - K$ do:
5.   Find $i$ and $j$ in $I$ such that $d_{ij}$ is minimal.
6.   Create a new label set $\mathcal{L}_k := \mathcal{L}_i \cup \mathcal{L}_j$.
7.   Add the new identifier $k$ to $I$ and remove the old identifiers $i$ and $j$ from $I$.
8.   Update the distance matrix $D$ with respect to the identifiers $i$, $j$, and $k$.
9. return the label sets $\{\mathcal{L}_i\}$.

Initially, the distance matrix $D$ contains the (linkage) distances between the one-point clusters containing one of the data points $x_1,\ldots,x_n$, and hence with identifiers $1,\ldots,n$. Finding the shortest distance amounts to a table lookup in $D$. When the closest clusters are found, they are merged into a new cluster, and a new identifier (the smallest positive integer that has not yet been used as an identifier) is assigned to this cluster. The old identifiers $i$ and $j$ are removed from the set of active cluster identifiers. The matrix $D$ is then updated by adding a $k$-th column and row that contain the distances between the new cluster $k$ and any active cluster $m$.

This updating step could be computationally quite costly if the cluster sizes are large and the linkage distance between the clusters depends on all points within the clusters. Fortunately, for many linkage functions the matrix $D$ can be updated in an efficient manner. Suppose that at some stage in the algorithm clusters $\mathcal{I}$ and $\mathcal{J}$, with identifiers $i$ and $j$, respectively, are merged into a cluster $\mathcal{K}$ with identifier $k$. Let $\mathcal{M}$, with identifier $m$, be a previously assigned cluster. An update rule of the linkage distance $d_{km}$ between $\mathcal{K}$ and $\mathcal{M}$ is called a Lance–Williams update if it can be written in the form
$$
d_{km} = \alpha\, d_{im} + \beta\, d_{jm} + \gamma\, d_{ij} + \delta\, |d_{im} - d_{jm}|,
$$
where $\alpha, \beta, \gamma, \delta$ depend only on simple characteristics of the clusters involved, such as the number of elements within the clusters. The table below shows the update constants for a number of common linkage functions. For example, for single linkage, $d_{im}$ is the minimal distance between $\mathcal{I}$ and $\mathcal{M}$, and $d_{jm}$ is the minimal distance between $\mathcal{J}$ and $\mathcal{M}$. The smallest of these is the minimal distance between $\mathcal{K}$ and $\mathcal{M}$; that is,
$$
d_{km} = \min\{d_{im}, d_{jm}\} = \frac{d_{im}}{2} + \frac{d_{jm}}{2} - \frac{|d_{im} - d_{jm}|}{2}.
$$
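As a quick numerical check of the Lance–Williams identity for single linkage, the sketch below compares the update formula with a direct minimum; the three distance values are illustrative.

```python
# illustrative cluster-to-cluster distances
d_im, d_jm, d_ij = 4.0, 2.5, 3.0

# Lance-Williams update for single linkage: alpha = beta = 1/2, gamma = 0, delta = -1/2
d_km = 0.5*d_im + 0.5*d_jm - 0.5*abs(d_im - d_jm)
print(d_km, min(d_im, d_jm))   # both give 2.5
```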
Table: Constants for the Lance–Williams update rule for various linkage functions, with $n_i$, $n_j$, $n_m$ denoting the number of elements in the corresponding clusters.

Linkage         alpha                          beta                           gamma                     delta
single          1/2                            1/2                            0                         -1/2
complete        1/2                            1/2                            0                         1/2
group average   n_i/(n_i + n_j)                n_j/(n_i + n_j)                0                         0
Ward            (n_i + n_m)/(n_i + n_j + n_m)  (n_j + n_m)/(n_i + n_j + n_m)  -n_m/(n_i + n_j + n_m)    0

In practice, the agglomerative clustering algorithm is run until a single cluster is obtained. Instead of returning the label sets of all clusters, a linkage matrix is returned that contains the same information. At the end of each iteration, the linkage matrix stores the merged labels $i$ and $j$ as well as the (minimal) distance $d_{ij}$. Other information, such as the number of elements in the merged cluster, can also be stored. Dendrograms and cluster labels can be constructed directly from the linkage matrix. In the following example, the linkage matrix is returned by the method agg_cluster.

Example (Agglomerative Hierarchical Clustering). The Python code below gives a basic implementation of the greedy agglomerative clustering algorithm, using the Ward linkage function. The methods fcluster and dendrogram from the scipy module can be used to identify the labels of the points in each cluster and to draw the corresponding dendrogram.

aggcluster.py

```python
import numpy as np
from scipy.spatial.distance import cdist

def update_distances(D, i, j, sizes):
    """Lance-Williams (Ward) distances to the cluster formed by merging i and j."""
    n = D.shape[0]
    d = np.inf*np.ones(n + 1)
    for k in range(n):                    # update distances
        d[k] = ((sizes[i] + sizes[k])*D[i, k] + (sizes[j] + sizes[k])*D[j, k]
                - sizes[k]*D[i, j])/(sizes[i] + sizes[j] + sizes[k])
    infs = np.inf*np.ones(n)              # array of infinities
    D[i, :], D[:, i], D[j, :], D[:, j] = infs, infs, infs, infs   # deactivate i and j
    new_D = np.inf*np.ones((n + 1, n + 1))
    new_D[0:n, 0:n] = D                   # copy the old matrix into new_D
    new_D[-1, :], new_D[:, -1] = d, d     # add the new row and column
    return new_D

def agg_cluster(X):
    n = X.shape[0]
    sizes = np.ones(n)
    D = cdist(X, X, metric='sqeuclidean')              # initialize the distance matrix
    np.fill_diagonal(D, np.inf*np.ones(D.shape[0]))
    Z = np.zeros((n - 1, 4))                           # linkage matrix encodes the hierarchy tree
    for t in range(n - 1):
        i, j = np.unravel_index(D.argmin(), D.shape)   # minimizer pair
        sizes = np.append(sizes, sizes[i] + sizes[j])
        Z[t, :] = np.array([i, j, np.sqrt(D[i, j]), sizes[-1]])
        D = update_distances(D, i, j, sizes)           # update the distance matrix
    return Z

import scipy.cluster.hierarchy as h
import matplotlib.pyplot as plt

X = np.genfromtxt('clusterdata.csv', delimiter=',')    # read the data
Z = agg_cluster(X)                                     # form the linkage matrix
h.dendrogram(Z)                                        # SciPy can produce a dendrogram from Z

# fcluster assigns cluster ids to all points based on Z
cl = h.fcluster(Z, criterion='maxclust', t=3)
plt.figure(2), plt.clf()
cols = ['red', 'green', 'blue']
colors = [cols[i - 1] for i in cl]
plt.scatter(X[:, 0], X[:, 1], c=colors)
plt.show()
```

Note that the distance matrix is initialized with the squared Euclidean distance, so that the Ward linkage is rescaled by a factor of 2. Also note that the linkage matrix stores the square root of the minimal cluster distances rather than the distances themselves. We leave it as an exercise to check that, with these modifications, the results agree with the linkage method from scipy; see the exercises.

In contrast to the bottom-up (agglomerative) approach to hierarchical clustering, the divisive approach starts with one cluster, which is divided into two clusters that are as "dissimilar" as possible; these can then be further divided, and so on. We can use the same linkage criteria as for agglomerative clustering to divide a parent cluster into two child clusters, by maximizing the distance between the child clusters. Although it is natural to try to group the data by separating dissimilar points as far as possible, the implementation of this idea tends to scale poorly with $n$. The problem is related to the well-known max-cut problem: given an $n \times n$ matrix of positive costs $c_{ij}$, $i, j \in \{1,\ldots,n\}$, partition the index set $\{1,\ldots,n\}$ into two subsets $\mathcal{J}$ and $\mathcal{K}$ such that the total cost across the sets, that is,
$$
\sum_{j\in\mathcal{J}} \sum_{k\in\mathcal{K}} c_{jk},
$$
is maximal. If instead we maximize the average distance across the sets, we obtain the group average linkage criterion.

Example (Divisive Clustering via CE). The following Python code divides a small data set into two parts according to maximal group average linkage. It uses a short cross-entropy algorithm similar to the one presented earlier. Given a vector of probabilities $\{p_i\}$, the algorithm generates an $N \times n$ matrix of Bernoulli random variables, with success probability $p_i$ for column $i$. For each row, the 0s and 1s divide the index set into two clusters, and the corresponding average-linkage
distance is computed. The matrix is then sorted row-wise according to these distances. Finally, the probabilities $\{p_i\}$ are updated according to the mean values of the best-performing rows. The process is repeated until the $\{p_i\}$ degenerate to a binary vector, which then presents the (approximate) solution.

clustce.py

```python
import numpy as np
from numpy import genfromtxt
from scipy.spatial.distance import squareform, pdist
import matplotlib.pyplot as plt

def S(x, D):
    V1 = np.where(x == 0)[0]       # {V1, V2} is the partition
    V2 = np.where(x == 1)[0]
    tmp = D[V1]
    tmp = tmp[:, V2]
    return np.mean(tmp)            # the average-linkage size of the cut

def maxcut(D, N, eps, rho, alpha):
    n = D.shape[1]
    Ne = int(rho*N)
    p = 1/2*np.ones(n)
    p[0] = 1.0                     # fix the first point in one of the clusters
    while np.max(np.minimum(p, np.subtract(1, p))) > eps:
        x = np.array(np.random.uniform(0, 1, (N, n)) <= p, dtype=np.int64)
        sx = np.zeros(N)
        for i in range(N):
            sx[i] = S(x[i], D)
        sortSX = np.flip(np.argsort(sx))
        print("gamma =", sx[sortSX[Ne - 1]], "best =", sx[sortSX[0]])
        elIds = sortSX[0:Ne]
        elites = x[elIds]
        pnew = np.mean(elites, axis=0)
        p = alpha*pnew + (1.0 - alpha)*p
    return np.round(p)

Xmat = genfromtxt('clusterdata.csv', delimiter=',')
n = Xmat.shape[0]
D = squareform(pdist(Xmat))

N = 1000                 # CE sample size (illustrative value)
eps = 10**-2             # stopping tolerance (illustrative value)
rho = 0.1                # rarity parameter (illustrative value)
alpha = 0.9              # smoothing parameter (illustrative value)

# CE
pout = maxcut(D, N, eps, rho, alpha)
cutval = S(pout, D)
print("cutvalue", cutval)

# plot
xblue = Xmat[np.where(pout == 0)[0]]
xred = Xmat[np.where(pout == 1)[0]]
plt.scatter(xblue[:, 0], xblue[:, 1], c="blue")
plt.scatter(xred[:, 0], xred[:, 1], c="red")
plt.show()
```

[Figure: Division of the data into two clusters via the cross-entropy method.]

Principal Component Analysis (PCA)

The main idea of principal component analysis (PCA) is to reduce the dimensionality of a data set consisting of many variables. PCA is a feature reduction (or feature extraction) mechanism that helps us to handle high-dimensional data with more features than is convenient to interpret.

Motivation: Principal Axes of an Ellipsoid

Consider a $d$-dimensional normal distribution with mean vector $\mathbf{0}$ and covariance matrix $\Sigma$. The corresponding pdf is
$$
f(x) = \frac{1}{\sqrt{(2\pi)^d\,|\Sigma|}}\; e^{-\frac{1}{2}\, x^\top \Sigma^{-1} x}, \quad x \in \mathbb{R}^d .
$$
If we were to draw many iid samples from this pdf, the points would roughly have an ellipsoid pattern, as illustrated in the figure further below, and correspond to contours — sets of
points $x$ such that $x^\top \Sigma^{-1} x = c$ for some $c > 0$. In particular, consider the ellipsoid
$$
x^\top \Sigma^{-1} x = 1, \quad x \in \mathbb{R}^d .
$$
Let $\Sigma = B B^\top$, where $B$ is, for example, the (lower) Cholesky factor. Then, as explained earlier, the ellipsoid can also be viewed as the linear transformation of the $d$-dimensional unit sphere via the matrix $B$. Moreover, the principal axes of the ellipsoid can be found via a singular value decomposition (SVD) of $B$ (or of $\Sigma$). In particular, suppose that an SVD of $B$ is $B = U D V^\top$ (note that an SVD of $\Sigma$ is then $U D^2 U^\top$). The columns of the matrix $U D$ correspond to the principal axes of the ellipsoid, and the relative magnitudes of the axes are given by the elements of the diagonal matrix $D$. If some of these magnitudes are small compared with the others, a reduction of the dimension of the space may be achieved by projecting each point $x\in\mathbb{R}^d$ onto the subspace spanned by the main (say $k \leq d$) columns of $U$ — the so-called principal components.

Suppose without loss of generality that the first $k$ principal components are given by the first $k$ columns of $U$, and let $U_k$ be the corresponding $d \times k$ matrix. With respect to the standard basis $\{e_i\}$, the vector $x = x_1 e_1 + \cdots + x_d e_d$ is represented by the $d$-dimensional vector $[x_1,\ldots,x_d]^\top$; with respect to the orthonormal basis $\{u_i\}$ formed by the columns of the matrix $U$, the representation of $x$ is $U^\top x$. Similarly, the projection of any point $x$ onto the subspace spanned by the first $k$ principal components is represented by the $k$-dimensional vector $z = U_k^\top x$ with respect to the orthonormal basis formed by the columns of $U_k$. So, the idea is that if a point $x$ lies close to its projection $U_k U_k^\top x$, we may represent it via $k$ numbers instead of $d$, using the combined features given by the principal components.

Example (Principal Components). Consider a $3\times 3$ covariance matrix $\Sigma$, written as $\Sigma = B B^\top$ for a suitable matrix $B$. The figure below depicts the corresponding ellipsoid $x^\top\Sigma^{-1}x = 1$, which can be obtained by linearly transforming the points on the unit sphere by means of the matrix $B$. The principal axes and sizes of the ellipsoid are found through a singular value decomposition $B = U D V^\top$, yielding an orthogonal matrix $U$ and a diagonal matrix $D$.
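A minimal sketch of the principal-axes computation just described: factor an assumed covariance matrix as $\Sigma = BB^\top$, take an SVD of $B$, and read off the axes $UD$. The matrix values below are illustrative, not those of the book's example.

```python
import numpy as np

Sigma = np.array([[4.0, 1.0, 0.5],
                  [1.0, 1.0, 0.2],
                  [0.5, 0.2, 0.5]])       # illustrative positive definite covariance matrix
B = np.linalg.cholesky(Sigma)             # Sigma = B B^T (lower Cholesky factor)
U, d, Vt = np.linalg.svd(B)               # B = U D V^T
print(U @ np.diag(d))                     # columns: principal axes of the ellipsoid
print(d)                                  # relative magnitudes of the axes
```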
The columns of $U D$ show the directions of the principal axes of the ellipsoid, and the diagonal elements of $D$ indicate the relative magnitudes of the principal axes. The first principal component is given by the first column of $U$, and the second principal component by the second column of $U$. The projection of a point $x$ onto the one-dimensional space spanned by the first principal component $u_1$ is $u_1 u_1^\top x$; with respect to the basis vector $u_1$, it is represented by the single number $z = u_1^\top x$.

[Figure: A "surfboard" ellipsoid, where one principal axis is significantly larger than the other two.]

PCA and Singular Value Decomposition (SVD)

In the setting above we did not consider any data drawn from a multivariate pdf; the whole analysis rested on linear algebra. In principal component analysis (PCA) we start with data $x_1,\ldots,x_n$, where each $x_i$ is $d$-dimensional. PCA does not require assumptions on how the data were obtained, but to make the link with the previous section, we can think of the data as iid draws from a multivariate normal pdf. Let us collect the data as the rows of a matrix $X$ in the usual way; that is,
$$
X = \begin{bmatrix} x_1^\top \\ \vdots \\ x_n^\top \end{bmatrix}
  = \begin{bmatrix} x_{11} & \cdots & x_{1d} \\ \vdots & & \vdots \\ x_{n1} & \cdots & x_{nd} \end{bmatrix}.
$$
The matrix $X$ will be the PCA's input. Under this setting, the data consist of $n$ points in $d$-dimensional space, and our goal is to present the data using feature vectors of dimension $k < d$.

In accordance with the previous section, we assume that the underlying distribution of the data has expectation vector $\mathbf{0}$. In practice, this means that before PCA is applied, the data need to be centered by subtracting the column mean in every column:
$$
x'_{ij} = x_{ij} - \bar{x}_j,
$$
where $\bar{x}_j = \frac{1}{n}\sum_{i=1}^n x_{ij}$. We assume from now on that the data come from a general $d$-dimensional distribution with mean vector $\mathbf{0}$ and some covariance matrix $\Sigma$. The covariance matrix is by definition equal to the expectation of the random matrix $X X^\top$ (for a single random column vector $X$) and can be estimated from the data $x_1,\ldots,x_n$ via the sample average
$$
S = \frac{1}{n}\sum_{i=1}^n x_i x_i^\top .
$$
As $S$ is a covariance matrix, we may conduct the same analysis for $S$ as we did for $\Sigma$ in the previous section. Specifically, suppose that $S = U D^2 U^\top$ is an SVD of $S$ and let $U_k$ be the $d\times k$ matrix whose columns are the $k$ principal components; that is, the columns of $U$ corresponding to the $k$ largest diagonal elements of $D^2$. Note that we have used $D^2$ instead of $D$ to be compatible with the previous section. The transformation $z_i = U_k^\top x_i$ maps each vector $x_i \in \mathbb{R}^d$ (thus, with $d$ features) to a vector $z_i \in \mathbb{R}^k$ (thus, with $k$ features), representing $x_i$ in the subspace spanned by the columns of $U_k$ with respect to the orthonormal basis formed by those columns. The corresponding covariance matrix of the $\{z_i\}$ is diagonal. The diagonal elements of $D^2$ can be interpreted as the variances of the data in the directions of the principal components, and their sum (that is, the trace of $D^2$) is a measure for the total amount of variance in the data. The proportion of each diagonal element to this total indicates how much of the variance in the data is explained by the corresponding principal component.
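The "proportion of variance explained" interpretation can be sketched directly from the SVD of the sample covariance matrix; the simulated data below are an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(500, 3)) @ np.diag([3.0, 1.0, 0.3])   # illustrative data
X = X - X.mean(axis=0)                     # center the data
U, D2, _ = np.linalg.svd(X.T @ X/len(X))   # sample covariance S = U D^2 U^T
print(D2/D2.sum())                         # proportion of variance explained per component
Z = X @ U[:, :1]                           # one-dimensional PCA representation of the data
print(Z.shape)
```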
Another way to look at PCA is by considering the question: how can we best project the data onto a $k$-dimensional subspace in such a way that the total squared distance between the projected points and the original points is minimal? Any orthogonal projection onto a $k$-dimensional subspace $\mathcal{V}_k$ can be represented by a matrix $U_k U_k^\top$, where $U_k = [u_1,\ldots,u_k]$ and the $\{u_i\}$ are orthogonal vectors of length 1 that span $\mathcal{V}_k$. The above question can thus be formulated as the minimization program
$$
\min_{u_1,\ldots,u_k} \sum_{i=1}^n \left\| x_i - U_k U_k^\top x_i \right\|^2 .
$$
Now observe that
$$
\sum_{i=1}^n \| x_i - U_k U_k^\top x_i \|^2
= \sum_{i=1}^n \left( x_i^\top x_i - x_i^\top U_k U_k^\top x_i \right)
= \sum_{i=1}^n \|x_i\|^2 - \sum_{\ell=1}^k \sum_{i=1}^n u_\ell^\top x_i x_i^\top u_\ell
= \sum_{i=1}^n \|x_i\|^2 - n \sum_{\ell=1}^k u_\ell^\top S\, u_\ell ,
$$
where we have used the cyclic property of the trace and the fact that $U_k U_k^\top$ can be written as $\sum_{\ell=1}^k u_\ell u_\ell^\top$. It follows that the minimization problem above is equivalent to the maximization problem
$$
\max_{u_1,\ldots,u_k} \sum_{\ell=1}^k u_\ell^\top S\, u_\ell .
$$
This maximum can be at most the sum of the $k$ largest diagonal elements of $D^2$, and it is attained precisely when $u_1,\ldots,u_k$ are the first $k$ principal components of $S$.

Example (Singular Value Decomposition). Consider a data set of independent samples from the three-dimensional Gaussian distribution with mean vector $\mathbf{0}$ and the "surfboard" covariance matrix of the earlier example. After replacing $X$ with its centered version, an SVD $U D^2 U^\top$ of $X^\top X/n$ yields the principal component matrix $U$ and the diagonal matrix $D^2$. Apart from possible sign changes of the columns, the principal component matrix is similar to that of the earlier example, and likewise for the matrix $D$. We also observe that a large proportion of the total variance is explained by the first principal component. The figure below shows the projection of the centered data onto the subspace spanned by this principal component.

[Figure: Data from the "surfboard" pdf projected onto the subspace spanned by the largest principal component.]

The following Python code was used.
pcadat.py

```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

X = np.genfromtxt('pcadat.csv', delimiter=',')
n = X.shape[0]
X = X - X.mean(axis=0)                   # center the data

G = X.T @ X
U, _, _ = np.linalg.svd(G/n)

# projected points
Y = X @ np.outer(U[:, 0], U[:, 0])

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot(Y[:, 0], Y[:, 1], Y[:, 2], c='k', linewidth=1)
ax.scatter(X[:, 0], X[:, 1], X[:, 2], c='b')
ax.scatter(Y[:, 0], Y[:, 1], Y[:, 2], c='r')
for i in range(n):
    ax.plot([X[i, 0], Y[i, 0]], [X[i, 1], Y[i, 1]], [X[i, 2], Y[i, 2]], 'b')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
plt.show()
```

Next is an application of PCA to Fisher's famous iris data set, already mentioned earlier in the book.

Example (PCA for the Iris Data Set). The iris data set contains measurements on four features of the iris plant — sepal length and width, and petal length and width — for a total of 150 specimens. The full data set also contains the species name, but for the purpose of this example we ignore it. Pairwise scatterplots of the data show that there is significant correlation between the different features. Can we perhaps describe the data using fewer features by taking certain linear combinations of the original features? To investigate this, let us perform PCA, first centering the data. The following Python code implements the PCA. It is assumed that a CSV file irisx.csv has been made that contains the iris data set (without the species information).

pcairis.py

```python
import seaborn as sns
import numpy as np
np.set_printoptions(precision=4)

X = np.genfromtxt('irisx.csv', delimiter=',')
n = X.shape[0]
X = X - np.mean(X, axis=0)                  # center the data
[U, D2, UT] = np.linalg.svd((X.T @ X)/n)    # S = U D^2 U^T
print('U = \n', U)
print('\n diag(D^2) =', D2)

z = U[:, 0].T @ X.T                         # project onto the first principal component
sns.kdeplot(z)                              # kernel density estimate of the projected data
```

The output shows the principal component matrix $U$ as well as the diagonal of the matrix $D^2$. We see that a large proportion of the variance is explained by the first principal component $u_1$. Thus, it makes sense to transform each data point $x$ to $z = u_1^\top x$. The figure below shows a kernel density estimate of the transformed data. Interestingly, we see two modes, indicating at least two clusters in the data.

[Figure: Kernel density estimate of the PCA-combined iris data.]

Further Reading

Various information-theoretic measures to quantify uncertainty, including the Shannon entropy and the Kullback–Leibler divergence, may be found in [ ]. The Fisher information, the prominent information measure in statistics, is discussed in detail in [ ]. Akaike's information criterion appeared in [ ]. The EM algorithm was introduced in [ ], and [ ] gives an in-depth treatment. Convergence proofs for the EM algorithm may be found in [ ]. A classical reference on kernel density estimation is [ ], and [ ] is the main reference for the theta kernel density estimator. Theory and applications of finite mixture models may be found in [ ]. For more details on clustering applications and algorithms, as well as references on data compression, vector quantization, and pattern recognition, we refer
to [ ]. A useful modification of the K-means algorithm is the fuzzy K-means algorithm; see, e.g., [ ]. A popular way to choose the starting positions in K-means is given by the K-means++ heuristic, introduced in [ ].

Exercises

1. This exercise is to show that the Fisher information matrix $F(\theta)$ is equal to the matrix $H(\theta)$ defined in the text, in the special case where $g = f(\cdot\mid\theta)$, and under the assumption that integration and differentiation orders can be interchanged.
(a) Let $h$ be a vector-valued function and $f$ a real-valued function. Prove the following quotient rule for differentiation:
$$
\frac{\partial}{\partial\theta}\left[\frac{h(\theta)}{f(\theta)}\right]
= \frac{1}{f(\theta)}\,\frac{\partial h(\theta)}{\partial \theta}
- \frac{1}{f^2(\theta)}\, h(\theta)\, \frac{\partial f(\theta)}{\partial \theta^\top}.
$$
(b) Now take $h(\theta) = \frac{\partial f(X\mid\theta)}{\partial\theta}$ and $f(\theta) = f(X\mid\theta)$ in (a) and take expectations with respect to $\mathbb{E}_\theta$ on both sides to show that
$$
H(\theta) = F(\theta) - \mathbb{E}_\theta\!\left[\frac{1}{f(X\mid\theta)}\,\frac{\partial^2 f(X\mid\theta)}{\partial\theta\,\partial\theta^\top}\right].
$$
(c) Finally, show that the expectation on the right-hand side is the zero matrix.

2. Plot the mixture of the N(0, 1), U(0, 1), and Exp(1) distributions, with equal weights 1/3.

3. Denote the pdfs in Exercise 2 by $f_1$, $f_2$, and $f_3$, respectively. Suppose that $X$ is simulated via the two-step procedure: first draw $Z$ from $\{1, 2, 3\}$ according to the mixing weights, then draw $X$ from $f_Z$. Given an outcome $x$ of $X$, how likely is it to have come from the uniform pdf $f_2$?

4. Simulate an iid training set of a given size from a gamma distribution with known parameters, and implement Fisher's scoring method of the earlier example to find the maximum likelihood estimate. Plot the true and approximate pdfs.

5. Let $\tau = \{x_1,\ldots,x_n\}$ be iid data from a pdf $g(x\mid\theta)$ with Fisher information matrix $F(\theta)$. Explain why, under the conditions for which the information matrix equality holds, the averaged score
$$
S_\tau(\theta) := \frac{1}{n}\sum_{i=1}^n \frac{\partial \ln g(x_i\mid\theta)}{\partial\theta}
$$
has, for large $n$, approximately a multivariate normal distribution with expectation vector $\mathbf{0}$ and covariance matrix $F(\theta)/n$.

6. The figure below shows a Gaussian KDE with a given bandwidth on a small set of one-dimensional data points. Reproduce the plot in Python using the same bandwidth; plot also the KDE for the same data, but now with the uniform kernel $\varphi(x) = \tfrac{1}{2}\,1\{x\in[-1,1]\}$.
[Figure: The Gaussian KDE (solid line) is the equally weighted mixture of normal pdfs centered around the data points, each with standard deviation equal to the bandwidth (dashed).]

7. For fixed $x'$, the Gaussian kernel function
$$
f(x, t) := \frac{1}{\sqrt{2\pi t}}\, e^{-\frac{(x - x')^2}{2t}}
$$
is the solution to Fourier's heat equation
$$
\frac{\partial}{\partial t} f(x, t) = \frac{1}{2}\,\frac{\partial^2}{\partial x^2} f(x, t), \quad x \in \mathbb{R},\ t > 0,
$$
with initial condition $f(x, 0) = \delta(x - x')$ (the Dirac delta function at $x'$). Show this. As a consequence, the Gaussian KDE is the solution to the same heat equation, but now with initial condition $f(x, 0) = \frac{1}{n}\sum_{i=1}^n \delta(x - x_i)$. This was the motivation for the theta KDE [ ], which is a solution to the same heat equation but now on a bounded interval.

8. Show that the Ward linkage defined in the text is equal to
$$
d_{\mathrm{Ward}}(\mathcal{I},\mathcal{J}) = \frac{|\mathcal{I}|\,|\mathcal{J}|}{|\mathcal{I}| + |\mathcal{J}|}\,\|\bar{x}_{\mathcal{I}} - \bar{x}_{\mathcal{J}}\|^2 .
$$

9. Carry out the agglomerative hierarchical clustering of the example above via the linkage method from scipy.cluster.hierarchy. Show that the linkage matrices are the same. Give a scatterplot of the data, color-coded into three clusters.

10. Suppose that we have the data $\tau_n = \{x_1,\ldots,x_n\}$ in $\mathbb{R}$ and decide to train the two-component Gaussian mixture model
$$
g(x\mid\theta) = w_1\, \frac{\exp\!\left(-\frac{(x-\mu_1)^2}{2\sigma_1^2}\right)}{\sqrt{2\pi\sigma_1^2}}
+ w_2\, \frac{\exp\!\left(-\frac{(x-\mu_2)^2}{2\sigma_2^2}\right)}{\sqrt{2\pi\sigma_2^2}},
$$
where the parameter vector $\theta = [\mu_1, \mu_2, \sigma_1^2, \sigma_2^2, w_1, w_2]^\top$ belongs to the set
$$
\Theta = \{\theta : w_1 + w_2 = 1,\ w_1 \in [0,1],\ \mu_i \in \mathbb{R},\ \sigma_i^2 > 0,\ i = 1, 2\}.
$$
Suppose that the training is via maximum likelihood. Show that
$$
\sup_{\theta\in\Theta} \sum_{i=1}^n \ln g(x_i\mid\theta) = \infty.
$$
In other words, find a sequence of values for $\theta \in \Theta$ such that the likelihood grows without bound. How can we restrict the set $\Theta$ to ensure that the likelihood remains bounded?
11. A $d$-dimensional normal random vector $X \sim \mathcal{N}(\mu, \Sigma)$ can be defined via an affine transformation, $X = \mu + \Sigma^{1/2} Z$, of a standard normal random vector $Z \sim \mathcal{N}(\mathbf{0}, I_d)$, where $\Sigma^{1/2}(\Sigma^{1/2})^\top = \Sigma$. In a similar way, we can define a $d$-dimensional Student random vector $X \sim t_\alpha(\mu, \Sigma)$ via the transformation
$$
X = \mu + \Sigma^{1/2}\,\frac{Z}{\sqrt{S}},
$$
where $Z \sim \mathcal{N}(\mathbf{0}, I_d)$ and $S \sim \mathrm{Gamma}\!\left(\tfrac{\alpha}{2}, \tfrac{\alpha}{2}\right)$ are independent, $\alpha > 0$, and $\Sigma^{1/2}(\Sigma^{1/2})^\top = \Sigma$. Note that we obtain the multivariate normal distribution as a limiting case for $\alpha \to \infty$.
(a) Show that the density of the $t_\alpha(\mathbf{0}, I_d)$ distribution is given by
$$
t_\alpha(x) := \frac{\Gamma\!\left(\frac{\alpha + d}{2}\right)}{\Gamma\!\left(\frac{\alpha}{2}\right)(\alpha\pi)^{d/2}}\left(1 + \frac{\|x\|^2}{\alpha}\right)^{-\frac{\alpha+d}{2}}.
$$
By the transformation rule, it follows that the density of the $t_\alpha(\mu, \Sigma)$ distribution is given by $t_{\alpha,\Sigma}(x - \mu)$, where
$$
t_{\alpha,\Sigma}(x) := \frac{t_\alpha\!\left(\Sigma^{-1/2} x\right)}{\sqrt{|\Sigma|}}.
$$
[Hint: conditional on $S = s$, $X$ has a $\mathcal{N}(\mathbf{0}, I_d/s)$ distribution.]
(b) We wish to fit a $t_\alpha(\mu, \Sigma)$ distribution to given data $\tau = \{x_1,\ldots,x_n\}$ in $\mathbb{R}^d$ via the EM method. We use the representation above and augment the data with the vector $S = [S_1,\ldots,S_n]^\top$ of hidden variables. Show that the complete-data likelihood is given by
$$
g(\tau, s\mid\theta) = \prod_{i=1}^n \frac{(\alpha/2)^{\alpha/2}\, s_i^{(\alpha + d)/2 - 1}\,
\exp\!\left(-\tfrac{s_i}{2}\left(\alpha + \|\Sigma^{-1/2}(x_i - \mu)\|^2\right)\right)}{(2\pi)^{d/2}\,\Gamma(\alpha/2)\,\sqrt{|\Sigma|}}.
$$
(c) Show that, as a consequence, conditional on the data $\tau$ and the parameter $\theta$, the hidden data $S_1,\ldots,S_n$ are mutually independent, and
$$
(S_i \mid \tau, \theta) \sim \mathrm{Gamma}\!\left(\frac{\alpha + d}{2},\ \frac{\alpha + \|\Sigma^{-1/2}(x_i - \mu)\|^2}{2}\right).
$$
(d) At iteration $t$ of the EM algorithm, let $g^{(t)}(s) = g(s\mid\tau, \theta^{(t-1)})$ be the density of the missing data, given the observed data $\tau$ and the current parameter guess $\theta^{(t-1)}$. Verify that the expected complete-data log-likelihood is given by
$$
\mathbb{E}_{g^{(t)}} \ln g(\tau, S\mid\theta)
= \frac{n\alpha}{2}\ln\frac{\alpha}{2} - \frac{nd}{2}\ln(2\pi) - n\ln\Gamma\!\left(\frac{\alpha}{2}\right) - \frac{n}{2}\ln|\Sigma|
+ \sum_{i=1}^n \left(\frac{\alpha + d}{2} - 1\right)\mathbb{E}_{g^{(t)}} \ln S_i
- \frac{1}{2}\sum_{i=1}^n \left(\alpha + \|\Sigma^{-1/2}(x_i - \mu)\|^2\right)\mathbb{E}_{g^{(t)}} S_i .
$$
Show that
$$
\mathbb{E}_{g^{(t)}} S_i = w_i^{(t-1)} := \frac{\alpha^{(t-1)} + d}{\alpha^{(t-1)} + \left\|\left(\Sigma^{(t-1)}\right)^{-1/2}\!\left(x_i - \mu^{(t-1)}\right)\right\|^2}
$$
and
$$
\mathbb{E}_{g^{(t)}} \ln S_i = \psi\!\left(\frac{\alpha^{(t-1)} + d}{2}\right)
- \ln\!\left(\frac{\alpha^{(t-1)} + \left\|\left(\Sigma^{(t-1)}\right)^{-1/2}\!\left(x_i - \mu^{(t-1)}\right)\right\|^2}{2}\right),
$$
where $\psi := (\ln\Gamma)'$ is the digamma function.
(e) Finally, show that in the M-step of the EM algorithm $\theta^{(t)}$ is updated from $\theta^{(t-1)}$ as follows:
$$
\mu^{(t)} = \frac{\sum_{i=1}^n w_i^{(t-1)} x_i}{\sum_{i=1}^n w_i^{(t-1)}},
\qquad
\Sigma^{(t)} = \frac{1}{n}\sum_{i=1}^n w_i^{(t-1)}\,(x_i - \mu^{(t)})(x_i - \mu^{(t)})^\top,
$$
and $\alpha^{(t)}$ is defined implicitly through the solution of the nonlinear equation
$$
\ln\frac{\alpha}{2} + 1 - \psi\!\left(\frac{\alpha}{2}\right)
+ \frac{1}{n}\sum_{i=1}^n \left(\mathbb{E}_{g^{(t)}} \ln S_i - \mathbb{E}_{g^{(t)}} S_i\right) = 0 .
$$

12. A generalization of both the gamma and the inverse-gamma distribution is the generalized inverse-gamma distribution, which has density
$$
f(s) = \frac{(a/b)^{p/2}}{2\,K_p(\sqrt{ab})}\; s^{p-1}\, e^{-\frac{1}{2}(a s + b/s)}, \quad s > 0,\ a, b > 0,\ p \in \mathbb{R},
$$
where $K_p$ is the modified Bessel function of the second kind, which can be defined via the integral
$$
K_p(x) = \int_0^\infty e^{-x\cosh(t)}\cosh(p\,t)\,\mathrm{d}t, \quad x > 0.
$$
We write $S \sim \mathrm{GIG}(a, b, p)$ to denote that $S$ has a pdf of this form. The function $K_p$ has many interesting properties. Special cases include
$$
K_{1/2}(x) = K_{-1/2}(x) = \sqrt{\frac{\pi}{2x}}\; e^{-x},
\qquad
K_{3/2}(x) = \sqrt{\frac{\pi}{2x}}\; e^{-x}\left(1 + \frac{1}{x}\right).
$$
More generally, $K_p$ satisfies the recursion
$$
K_{p+1}(x) = \frac{2p}{x}\, K_p(x) + K_{p-1}(x).
$$
(a) Using the change of variables $s = \sqrt{b/a}\, e^{z}$, show that
$$
\int_0^\infty s^{p-1}\, e^{-\frac{1}{2}(as + b/s)}\,\mathrm{d}s = 2\left(\frac{b}{a}\right)^{p/2} K_p(\sqrt{ab}).
$$
(b) Let $S \sim \mathrm{GIG}(a, b, p)$. Show that
$$
\mathbb{E}\, S = \frac{\sqrt{b}\,K_{p+1}(\sqrt{ab})}{\sqrt{a}\,K_p(\sqrt{ab})}
\quad\text{and}\quad
\mathbb{E}\, S^{-1} = \frac{\sqrt{a}\,K_{p+1}(\sqrt{ab})}{\sqrt{b}\,K_p(\sqrt{ab})} - \frac{2p}{b}.
$$
13. In Exercise 11 we viewed the multivariate Student $t_\alpha$ distribution as a scale mixture of the $\mathcal{N}(\mathbf{0}, I_d)$ distribution. In this exercise we consider a similar transformation, but now $\Sigma^{1/2} Z$ (with $Z \sim \mathcal{N}(\mathbf{0}, I_d)$) is not divided but multiplied by $\sqrt{S}$, with $S \sim \mathrm{Gamma}(\alpha/2, 1/2)$:
$$
X = \mu + \sqrt{S}\,\Sigma^{1/2} Z,
$$
where $Z$ and $S$ are independent.
(a) Show, using Exercise 12, that for $\mu = \mathbf{0}$ and $\Sigma = I_d$ the random vector $X$ has a $d$-dimensional Bessel distribution, with density
$$
\kappa_\alpha(x) := \frac{2^{1 - (\alpha + d)/2}}{\pi^{d/2}\,\Gamma(\alpha/2)}\; \|x\|^{(\alpha - d)/2}\, K_{(\alpha - d)/2}(\|x\|), \quad x \in \mathbb{R}^d,
$$
where $K_p$ is the modified Bessel function of the second kind given in Exercise 12. We write $X \sim \mathrm{Bessel}_\alpha(\mathbf{0}, I_d)$. A random vector $X$ is said to have a $\mathrm{Bessel}_\alpha(\mu, \Sigma)$ distribution if it can be written in the form above; by the transformation rule, its density is then given by $|\Sigma|^{-1/2}\,\kappa_\alpha(\Sigma^{-1/2}(x - \mu))$. Special instances of the Bessel pdf include
$$
\kappa_2(x) = \tfrac{1}{2}\, e^{-|x|}, \quad x \in \mathbb{R},
$$
which is the pdf of the double-exponential or Laplace distribution, and
$$
\kappa_{d+1}(x) = \frac{\pi^{(1-d)/2}}{2^d\,\Gamma\!\left(\frac{d+1}{2}\right)}\; e^{-\|x\|}, \quad x \in \mathbb{R}^d .
$$
(b) Given the data $\tau = \{x_1,\ldots,x_n\}$ in $\mathbb{R}^d$, we wish to fit a Bessel pdf to the data by employing the EM algorithm, augmenting the data with the vector $S = [S_1,\ldots,S_n]^\top$ of missing data. We assume that $\alpha$ is known. Show that, conditional on $\tau$ (and given $\theta$), the missing data vector $S$ has independent components, with
$$
S_i \sim \mathrm{GIG}\!\left(1,\ b_i,\ \frac{\alpha - d}{2}\right), \quad\text{where } b_i := \|\Sigma^{-1/2}(x_i - \mu)\|^2 .
$$
(c) At iteration $t$ of the EM algorithm, let $g^{(t)}(s) = g(s\mid\tau, \theta^{(t-1)})$ be the density of the missing data, given the observed data $\tau$ and the current parameter guess $\theta^{(t-1)}$. Show that the expected complete-data log-likelihood is given by
$$
Q^{(t)}(\theta) := \mathbb{E}_{g^{(t)}} \ln g(\tau, S\mid\theta)
= -\frac{1}{2}\sum_{i=1}^n b_i(\theta)\, w_i^{(t-1)} - \frac{n}{2}\ln|\Sigma| + \text{constant},
$$
where $b_i(\theta) := \|\Sigma^{-1/2}(x_i - \mu)\|^2$ and
$$
w_i^{(t-1)} := \mathbb{E}_{g^{(t)}}\, S_i^{-1}
= \frac{K_{(\alpha - d)/2 + 1}\!\left(\sqrt{b_i(\theta^{(t-1)})}\right)}{\sqrt{b_i(\theta^{(t-1)})}\; K_{(\alpha - d)/2}\!\left(\sqrt{b_i(\theta^{(t-1)})}\right)} - \frac{\alpha - d}{b_i(\theta^{(t-1)})}.
$$
(d) From $Q^{(t)}$ derive the M-step of the EM algorithm; that is, show how $\theta^{(t)}$ is updated from $\theta^{(t-1)}$.
14. Consider the ellipsoid $E = \{x \in \mathbb{R}^d : x^\top \Sigma^{-1} x = 1\}$. Let $\Sigma = U D^2 U^\top$ be an SVD of $\Sigma$. Show that the linear transformation $x \mapsto D^{-1} U^\top x$ maps the points on $E$ onto the unit sphere $\{z \in \mathbb{R}^d : \|z\| = 1\}$.

15. An earlier figure shows how the centered "surfboard" data are projected onto the first column of the principal component matrix $U$. Suppose we project the data instead onto the plane spanned by the first two columns of $U$. What are $a$ and $b$ in the representation $a x_1 + b x_2 + x_3 = 0$ of this plane?

16. The kernel density estimate of the projected iris data suggests that we can assign each feature vector $x$ in the iris data set to one of two clusters, based on the value of $u_1^\top x$, where $u_1$ is the first principal component. Plot the sepal lengths against the petal lengths, coloring the points according to whether $u_1^\top x$ lies above or below a threshold separating the two modes. To which species of iris do these clusters correspond?
egression many supervised learning techniques can be gathered under the name "regressionthe purpose of this is to explain the mathematical ideas behind regression models and their practical aspects we analyze the fundamental linear model in detailand also discuss nonlinear and generalized linear models introduction francis galton observed in an article in that the heights of adult offspring areon the wholemore "averagethan the heights of their parents galton interpreted this as degenerative phenomenonusing the term "regressionto indicate this "return to mediocritynowadaysregression refers to broad class of supervised learning techniques where the aim is to predict quantitative response (outputvariable via function (xof an explanatory (inputvector [ ]consisting of featureseach of which can be continuous or discrete for instanceregression could be used to predict the birth weight of baby (the response variablefrom the weight of the motherher socio-economic statusand her smoking habits (the explanatory variableslet us recapitulate the framework of supervised learning established in the aim is to find prediction function that best guesses what the random output will be for random input vector the joint pdf (xyof and is unknownbut training set {( )(xn yn )is availablewhich is thought of as the outcome of random training set {( )(xn yn )of iid copies of (xyonce we have selected loss function loss( , )such as the squared-error loss loss( , ( ) recall the mnemonic use of "gfor "guess squared-error loss ( then the "bestprediction function is defined as the one that minimizes the risk `(ge loss(yg( )we saw in section that for the squared-error loss this optimal prediction function is the conditional expectation (xe[ xregression risk
as the squared-error loss is the most widely-used loss function for regressionwe will adopt this loss function in most of this the optimal prediction function ghas to be learned from the training set by minimizing the training loss (yi (xi )) ( ` (gn = learner over suitable class of functions note that in the above definitionthe training set is assumed to be fixed for random training set we will write the training loss as ` (gthe function ggt that minimizes the training loss is the function we use for prediction -the so-called learner when the function class is clear from the contextwe drop the superscript in the notation as we already saw in ( )conditional on xthe response can be written as (xe( )where ( this motivates standard modeling assumption in supervised learningin which the responses yn conditional on the explanatory variables xn xn are assumed to be of the form yi (xi ei nwhere the {ei are independent with ei and var ei for some function and variance the above model is usually further specified by assuming that is completely known up to an unknown parameter vectorthat isyi (xi bei ( while the model ( is described conditional on the explanatory variablesit will be convenient to make one further model simplificationand view ( as if the {xi were fixedwhile the {yi are random for the remainder of this we assume that the training feature vectors {xi are fixed and only the responses are randomthat ist {( )(xn yn )the advantage of the model ( is that the problem of estimating the function from the training data is reduced to the (much simplerproblem of estimating the parameter vector an obvious disadvantage is that functions of the form (bmay not accurately approximate the true unknown gthe remainder of this deals with the analysis of models of the form ( in the important case where the function (bis linearthe analysis proceeds through the class of linear models ifin additionthe error terms {ei are assumed to be gaussianthis analysis can be carried out using the rich theory of normal linear models
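Before specializing to linear models, a small simulation may help make the model yᵢ = g(xᵢ | β) + εᵢ and its training loss concrete. The following sketch uses an illustrative prediction function and parameter values (not taken from the book): the training loss of the true parameter is close to the noise variance σ², while a poor parameter guess gives a much larger loss.

import numpy as np

np.random.seed(12)
n = 100
x = np.random.rand(n)

# illustrative model: g(x | beta) = beta0 + beta1 * x, with noise standard deviation sigma
beta_true = np.array([1.0, 2.0])
sigma = 0.3
y = beta_true[0] + beta_true[1] * x + sigma * np.random.randn(n)

def training_loss(beta, x, y):
    """Squared-error training loss (1/n) * sum_i (y_i - g(x_i | beta))^2."""
    g = beta[0] + beta[1] * x
    return np.mean((y - g) ** 2)

print(training_loss(beta_true, x, y))             # close to sigma^2 = 0.09
print(training_loss(np.array([0.0, 0.0]), x, y))  # much larger for a poor guess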
linear regression the most basic regression model involves linear relationship between the response and single explanatory variable in particularwe have measurements ( )(xn yn that lie approximately on straight lineas in figure - - - - figure data from simple linear regression model following the general scheme captured in ( ) simple model for these data is that the {xi are fixed and variables {yi are random such that yi xi ei ( for certain unknown parameters and the {ei are assumed to be independent with expectation and unknown variance the unknown line { ( ( bis called the regression line thuswe view the responses as random variables that would lie exactly on the regression linewere it not for some "disturbanceor "errorterm represented by the {ei the extent of the disturbance is modeled by the parameter the model in ( is called simple linear regression this model can easily be extended to incorporate more than one explanatory variableas follows regression line simple linear regression model definition multiple linear regression model in multiple linear regression model the response depends on -dimensional explanatory vector [ xd ]via the linear relationship bd xd ewhere and var ( multiple linear regression model
thusthe data lie approximately on -dimensional affine hyperplane bd xd { ( where we define [ bd ]the function ( bis linear in bbut not linear in the feature vector xdue to the constant howeveraugmenting the feature space with the constant the mapping [ ] ( :[ xb becomes linear in the feature space and so ( becomes linear model (see section most software packages for regression include as feature by default note that in ( we only specified the model for single pair (xythe model for the training set {( )(xn yn )is simply that each yi satisfies ( (with xi and that the {yi are independent setting [ yn ]we can write the multiple linear regression model for the training data compactly as xb emodel matrix ( where [ en ]is vector of iid copies of and is the model matrix given by > xn xn xnd > example (multiple linear regression modelfigure depicts realization of the multiple linear regression model yi xi xi ei where ~iid ( / the fixed feature vectors (vectors of explanatory variablesxi [xi xi ] lie in the unit square figure data from multiple linear regression model
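Data of the kind shown in the figure can be generated with a sketch such as the one below. The exact coefficient values and noise level used for the book's figure are not reproduced here, so the values below are purely illustrative; the feature vectors are drawn uniformly from the unit square, as in the example.

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the 3d projection (older matplotlib)

np.random.seed(1)
n = 100

# explanatory vectors x_i = [x_i1, x_i2] drawn uniformly from the unit square
U = np.random.rand(n, 2)

# illustrative parameters (assumed values, not the book's)
beta = np.array([1.0, 1.0, 1.0])      # [beta0, beta1, beta2]
sigma = 0.25

# model matrix X = [1, x_i1, x_i2] and responses y = X beta + eps
X = np.hstack((np.ones((n, 1)), U))
y = X @ beta + sigma * np.random.randn(n)

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(U[:, 0], U[:, 1], y)
ax.set_xlabel('$x_1$'); ax.set_ylabel('$x_2$'); ax.set_zlabel('$y$')
plt.show()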
Analysis via Linear Models

Analysis of data from a linear regression model is greatly simplified through the linear model representation. In this section we present the main ideas for parameter estimation and model selection for a general linear model of the form

Y = Xβ + ε,

where X is an n × p model matrix, β = [β₁, …, β_p]ᵀ a vector of p parameters, and ε = [ε₁, …, ε_n]ᵀ an n-dimensional vector of independent error terms, with E εᵢ = 0 and Var εᵢ = σ², i = 1, …, n. Note that the model matrix X is assumed to be fixed, and Y and ε are random. A specific outcome of Y is denoted by y (in accordance with our earlier notation). Note that the multiple linear regression model was defined using a different parameterization; in particular, there we used β = [β₀, β₁, …, β_d]ᵀ, so, when applying the results in the present section to such models, be aware that p = d + 1. Also, in this section a feature vector x includes the constant 1, so that X = [x₁, …, x_n]ᵀ.

Parameter Estimation

The linear model Y = Xβ + ε contains two unknown parameters, β and σ², which have to be estimated from the training data. To estimate β, we can repeat exactly the same reasoning used in our recurring polynomial regression example, as follows. For a linear prediction function g(x) = xᵀβ, the (squared-error) training loss can be written as

ℓ_τ(g) = (1/n) ‖y − Xβ‖²,

and the optimal learner g_τ minimizes this quantity, leading to the least-squares estimate β̂, which satisfies the normal equations

XᵀXβ = Xᵀy.

The corresponding training loss can be taken as an estimate of σ²; that is,

σ̂² = (1/n) ‖y − Xβ̂‖².

To justify the latter, note that σ² is the second moment of the model errors εᵢ, i = 1, …, n, and could be estimated via the method of moments, using the sample average n⁻¹‖ε‖² = ‖y − Xβ‖²/n, if β were known. By replacing β with its estimator β̂, we arrive at the estimate above. Note that no distributional properties of the {εᵢ} were used, other than E εᵢ = 0 and Var εᵢ = σ².

The vector e := y − Xβ̂ is called the vector of residuals and approximates the (unknown) vector of model errors. The quantity ‖e‖² = Σᵢ₌₁ⁿ eᵢ² is called the residual sum of squares (RSS). Dividing the RSS by n − p gives an unbiased estimate of σ², which we call the estimated residual squared error (RSE); see the exercises.
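As a small numerical illustration of the above (on simulated data, with all parameter values chosen purely for illustration), the following sketch solves the normal equations and computes the two estimates of σ²:

import numpy as np

np.random.seed(3)
n, p = 100, 3
X = np.hstack((np.ones((n, 1)), np.random.rand(n, p - 1)))   # model matrix with constant feature
beta = np.array([1.0, -2.0, 0.5])
sigma = 0.5
y = X @ beta + sigma * np.random.randn(n)

# least-squares estimate: solve the normal equations X^T X beta = X^T y
betahat = np.linalg.solve(X.T @ X, X.T @ y)
# (equivalently: betahat, *_ = np.linalg.lstsq(X, y, rcond=None))

e = y - X @ betahat            # residuals
RSS = e @ e                    # residual sum of squares
sigma2_hat = RSS / n           # training-loss estimate of sigma^2
RSE = RSS / (n - p)            # unbiased estimate of sigma^2

print(betahat, sigma2_hat, RSE)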
in terms of the notation given in the summary table for supervised learningwe thus have the (observedtraining data is {xy the function class is the class of linear functions of xthat is { (bx xbb the (squared-errortraining loss is ` ( ( )ky xbk / bwhere argminbr ky xbk the learner gt is given by gt (xx> the minimal training loss is ` (gt ky xb bk / model selection and prediction even if we restrict the learner to be linear functionthere is still the issue of which explanatory variables (featuresto include while including too few features may result in large approximation error (underfitting)including too many may result in large statistical error (overfittingas discussed in section we need to select the features which provide the best tradeoff between the approximation and statistical errorsso that the (expectedgeneralization risk of the learner is minimized depending on how the (expectedgeneralization risk is estimatedthere are number of strategies for feature selection use test data ( that are obtained independently from the training data tto estimate the generalization risk ky gt ( ) via the test loss ( then choose the collection of features that minimizes the test loss when there is an abundance of datapart of the data can be reserved as test datawhile the remaining data is used as training data when there is limited amount of datawe can use cross-validation to estimate the expected generalization risk ky gt ( ) (where is random training set)as explained in section this is then minimized over the set of possible choices for the explanatory variables when one has to choose between many potential explanatory variablestechniques such as regularized least-squares and lasso regression become important such methods offer another approach to model selectionvia the regularization (or homotopypaths this will be the topic of section in the next rather than using computer-intensive techniquessuch as the ones aboveone can use theoretical estimates of the expected generalization risksuch as the in-sample riskaicand bicas in section and minimize this to determine good set of explanatory variables all of the above approaches do not assume any distributional properties of the error terms {ei in the linear modelother than that they are independent with expectation and variance ifhoweverthey are assumed to have normal (gaussiandistribution(that is{ei ~iid ( ))then the inclusion and exclusion of variables can
be decided by means of hypothesis tests. This is the classical approach to model selection, and will be discussed later in this chapter. As a consequence of the central limit theorem, one can use the same approach when the error terms are not necessarily normal, provided that their variance is finite and the sample size n is large. Finally, when using a Bayesian approach, a comparison of two models can be achieved by computing their so-called Bayes factor.

All of the above strategies can be thought of as specifications of a simple rule formulated by William of Occam, which can be interpreted as: when presented with competing models, choose the simplest one that explains the data. This age-old principle, known as Occam's razor, is mirrored in a famous quote of Einstein:

Everything should be made as simple as possible, but not simpler.

In linear regression, the number of parameters or predictors is usually a reasonable measure of the simplicity of the model.

Cross-Validation and Predicted Residual Sum of Squares

We start by considering n-fold cross-validation, also called leave-one-out cross-validation, for the linear model. We partition the data into n data sets, leaving out precisely one observation per data set, which we then predict based on the remaining n − 1 observations; see the earlier discussion of cross-validation for the general case. Let ŷ₋ᵢ denote the prediction for the i-th observation using all the data except yᵢ. The error in the prediction, yᵢ − ŷ₋ᵢ, is called a predicted residual, in contrast to an ordinary residual, eᵢ = yᵢ − ŷᵢ, which is the difference between an observation and its fitted value ŷᵢ = g_τ(xᵢ), obtained using the whole sample. In this way, we obtain the collection of predicted residuals {yᵢ − ŷ₋ᵢ}ᵢ₌₁ⁿ and summarize them through the predicted residual sum of squares (PRESS):

PRESS = Σᵢ₌₁ⁿ (yᵢ − ŷ₋ᵢ)².

Dividing the PRESS by n gives an estimate of the expected generalization risk. In general, computing the PRESS is computationally intensive, as it involves training and predicting n separate times. For linear models, however, the predicted residuals can be calculated quickly, using only the ordinary residuals and the projection matrix P = XX⁺ onto the linear space spanned by the columns of the model matrix X. The i-th diagonal element pᵢᵢ of the projection matrix is called the i-th leverage, and it can be shown that 0 ≤ pᵢᵢ ≤ 1 (see the exercises).
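As a small numerical check (on simulated data, with illustrative values), the following sketch computes the hat matrix and its leverages, and verifies that the leave-one-out shortcut formalized in the theorem that follows agrees with a brute-force leave-one-out computation:

import numpy as np

np.random.seed(5)
n, p = 20, 3
X = np.hstack((np.ones((n, 1)), np.random.rand(n, p - 1)))
beta = np.array([1.0, 2.0, -1.0])
y = X @ beta + 0.1 * np.random.randn(n)

P = X @ np.linalg.pinv(X)      # projection (hat) matrix onto the column space of X
leverage = np.diag(P)

print(leverage.min(), leverage.max())   # each leverage lies between 0 and 1
print(leverage.sum())                   # the leverages sum to p (the trace of P)

# predicted residuals via the shortcut of the theorem below
betahat = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ betahat
press = np.sum((e / (1 - leverage)) ** 2)
print(press)

# brute-force leave-one-out check
press_loo = 0.0
for i in range(n):
    idx = np.arange(n) != i
    b_i = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
    press_loo += (y[i] - X[i] @ b_i) ** 2
print(press_loo)               # agrees with the shortcut value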
theorem press for linear models consider the linear model ( )where the nxp model matrix is of full rank given an outcome [ yn ]of ythe fitted values can be obtained as pywhere xxx(xx)- xis the projection matrix if the leverage value pi :pii for all nthen the predicted residual sum of squares can be written as press = ei pi ! where ei yi yi yi (xb ) is the -th residual proofit suffices to show that the -th predicted residual can be written as yi - ei /( pi let - denote the model matrix with the -th rowxi removedand define - similarly thenthe least-squares estimate for using all but the -th observation is - ( >- - )- >- - writing xx >- - xi > we have by the sherman-morrison formula (xx)- xi > (xx)- ( >- - )- (xx)- > (xx)- xi where > (xx)- xi pi - - xy xi yi combining all these identitieswe have - ( >- - )- >- - (xx)- xi > (xx)- (xy xi yi ( pi (xx)- xi > (xx)- xi pi yi = (xx)- xi yi pi pi - > ( xxi xi (xx)- xi yi = pi pi - > ( xxi (yi xi (xx)- xi ei = = pi pi - it follows that the predicted value for the -th observation is given by by- > - > > (xx)- xi ei pi ei = yi pi pi henceyi - ei pi ei /( pi ei /( pi example (polynomial regression (cont )we return to example where we estimated the generalization risk for various polynomial prediction functions using independent validation data insteadlet us estimate the expected generalization risk via crossvalidation (thus using only the training setand apply theorem to compute the press
polyregpress py import numpy as np import matplotlib pyplot as plt def generate_data (beta sig ) np random rand( *np arange ( beta reshape ( , sig np random randn ( )return uy np random seed ( beta np array ([[ - - ]]tsig = ** , generate_data (beta ,sig ,nx np ones (( ) maximum number of parameters press np zeros ( + for in range ( , )if np hstack ((xu**( - ))add column to matrix np linalg pinv(xprojection matrix press [knp sum (( /( np diag(preshape ( , )))** plt plotpress [ : ]/nthe press values divided by for the constantlinearquadraticcubicand quartic order polynomial regression models arerespectively and hencethe cubic polynomial regression model has the lowest pressindicating that it has the best predictive performance in-sample risk and akaike information criterion in section we introduced the in-sample risk as measure for the accuracy of the prediction function to recapitulategiven fixed data set with associated response vector and matrix of explanatory variables xthe in-sample risk of prediction function is defined as `in ( :ex loss(yg( ))( where ex signifies that the expectation is taken under different probability modelin which takes the values xn with equal probabilityand given xi the random variable is drawn from the conditional pdf ( xi the difference between the in-sample risk and the training loss is called the optimism for the squared-error losstheorem expresses the expected optimism of learner gt as two times the average covariance between the predicted values and the responses if the conditional variance of the error (xgiven does not depend on xthen the expected in-sample risk of learner gt averaged over all training setshas simple expression
theorem expected in-sample risk for linear models let be the model matrix for linear modelof dimension if var[ (xx = does not depend on xthen the expected in-sample risk (with respect to the squared-error lossfor random learner gt is given by ex `in (gt ex ` (gt ` ( where `is the irreducible risk proofthe expected optimism isby definitionex [`in (gt ` (gt )whichfor the squared-error lossis equal to ` /nusing exactly the same reasoning as in example note that here ` equation ( is the basis of the following model comparison heuristicestimate the irreducible risk ` via vb using model with relatively high complexity then choose the linear model with the lowest value of ky xb bk vb ( we can also use the akaike information criterion (aicas heuristic for model comparison we discussed the aic in the unsupervised learning setting in section but the arguments used there can also be applied to the supervised caseunder the in-sample model for the data in particularlet (xywe wish to predict the joint density { =xi ( xi ) (zf (xy: = using prediction function ( thfrom family :{ ( th)th rq }where ( thg(xy th: { =xi gi ( thn = note that is the number of parameters (typically larger than for linear model with design matrixfollowing section the in-sample cross-entropy risk in this case is (th:-ex ln ( th)and to approximate the optimal parameter thwe minimize the corresponding training loss ln ( thrtn (th: = the optimal parameter thn for the training loss is thus found by minimizing xln ln ( thn =
that isit is the maximum likelihood estimate of thb thn argmax th = ln gi (yi thunder the assumption that (thfor some parameter thwe have from theorem that the estimated in-sample generalization risk can be approximated as ln ( thn ex (thn rtn (thn ln = this leads to the heuristic of selecting the learner ( thn with the smallest value of the aicn ln gi (yi thn ( - = example (normal linear modelfor the normal linear model (xbs (see ( ))with -dimensional vector bwe have (yi > ) gi (yi bs exp |{ ps =th so that the aic is ky xb bk ln( pn ln qb ( where ( bb is the maximum likelihood estimate and + is the number of parameters (including for model comparison we may remove the ln( pterm if all the models are normal linear models certain software packages report the aic without the ln term in ( this may lead to sub-optimal model selection if normal models are compared with nonnormal ones categorical features suppose thatas described in the data is given in the form of spreadsheet or data frame with rows and columnswhere the first element of row is the response variable yi and the remaining elements form the vector of explanatory variables > when all the explanatory variables (featurespredictorsare quantitativethen the model matrix can be directly read off from the data frame as the matrix with rows > howeverwhen some explanatory variables are qualitative (categorical)such one-toone correspondence between data frame and model matrix no longer holds the solution is to include indicator or dummy variables
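As a small illustration of indicator (dummy) variables, the following sketch uses an invented data frame with one quantitative and one categorical feature and builds the corresponding model matrix with pandas; the first level of the factor is dropped so that the model matrix stays of full rank. The factorial-experiment setting that follows makes this construction concrete.

import pandas as pd

# a small invented data frame with one quantitative and one categorical feature
df = pd.DataFrame({'y': [5.0, 6.1, 7.3, 6.8, 8.0, 9.2],
                   'x': [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
                   'u': ['red', 'blue', 'green', 'red', 'blue', 'green']})

# indicator (dummy) features for u, omitting the first level to keep X of full rank
dummies = pd.get_dummies(df['u'], prefix='u', drop_first=True).astype(float)
X = pd.concat([pd.Series(1.0, index=df.index, name='intercept'),
               df['x'], dummies], axis=1)
print(X)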
factorial experiments factors levels linear models with continuous responses and categorical explanatory variables often arise in factorial experiments these are controlled statistical experiments in which the aim is to assess how response variable is affected by one or more factors tested at several levels typical example is an agricultural experiment where one wishes to investigate how the yield of food crop depends on factors such as locationpesticideand fertilizer example (crop yieldthe data in table lists the yield of food crop for four different crop treatments ( strengths of fertilizeron four different blocks (plotstable crop yield for different treatments and blocks treatment block the corresponding data framegiven in table has rows and columnsone column for the crop yield (the response variable)one column for the treatmentwith levels and one column for the blockalso with levels the values and have no quantitative meaning (it does not make sense to take their averagefor example-they merely identify the category of the treatment or block table crop yield data organized as data frame in standard format yield treatment block indicator feature in generalsuppose there are factor (categoricalvariables ur where the jth factor has mutually exclusive levelsdenoted by in order to include these categorical variables in linear modela common approach is to introduce an indicator feature jk { kfor each factor at level thusx jk if the value of factor is and otherwise since { it suffices to consider only of these indicator features for each factor (this prevents the model matrix from being rank deficientfor single response ythe feature vector xis thus row vector of binary variables
that indicates which levels were observed for each factor the model assumption is that depends in linear way on the indicator featuresapart from an error term that isy pj = = jk { ke{ jk where we have omitted one indicator feature (corresponding to level for each factor for independent responses yn where each yi corresponds to the factor values ui uir let xi jk {ui kthenthe linear model for the data becomes yi pj jk xi jk ei ( = = where the {ei are independent with expectation and some variance by gathering the and { jk into vector band the {xi jk into matrix xwe have again linear model of the form ( the model matrix has rows and rj= ( columns using the above convention that the parameters are subsumed in the parameter (corresponding to the "constantfeature)we can interpret as baseline response when using the explanatory vector xfor which for all factors the other parameters { jk can be viewed as incremental effects relative to this baseline effect for exampleb describes by how much the response is expected to change if level is used instead of level for factor incremental effects example (crop yield (cont )in example the linear model ( has eight parametersb and the model matrix depends on how the crop yields are organized in vector and on the ordering of the factors let us order column-wise from table as in [ ]and let treatment be factor and block be factor then we can write ( as ewhere { |{zb and with [ ]and [ ]estimation of and model selectionand prediction can now be carried out in the usual manner for linear models in the context of factorial experimentsthe model matrix is often called the design matrixas it specifies the design of the experimente how many replications are taken for each combination of factor levels the model ( can be extended by adding products of indicator variables as new features such features are called interaction terms design matrix interaction
nested models nested models let be model matrix of the form [ ]where and are model matrices of dimension and ( )respectively the linear models and are said to be nested within the linear model xb this simply means that certain features in are ignored in each of the first two models note that bb and are parameter vectors of dimension pkand krespectively in what followswe assume that and that all model matrices are full-rank suppose we wish to assess whether to use the full model matrix or the reduced model matrix let be the estimate of under the full model (that isobtained via ( ))and let bb denote the estimate of for the reduced model let ( xb be the projection of ( onto the space span(xspanned by the columns of xand let bb be the projection of onto the space span( spanned by the columns of onlysee figure in order to decide whether the features in are neededwe may compare the estimated error terms of the two modelsas calculated by ( )that isby the residual sum of squares divided by the number of observations if the outcome of this comparison is that there is little difference between the model error for the full and reduced modelthen it is appropriate to adopt the reduced modelas it has fewer parameters than the full modelwhile explaining the data just as well the comparison is thus between the squared norms ky ( and ky ( because of the nested nature of the linear modelsspan( is subspace of span(xandconsequentlythe orthogonal projection of ( onto span( is the same as the orthogonal projection of onto span( )that isy ( by pythagorastheoremwe thus have the decomposition ky ( ( ky ( ky ( this is also illustrated in figure ( ( ( ( ( span(xy ( span( figure the residual sum of squares for the full model corresponds to ky - ( and for the reduced model it is ky - ( by pythagoras' theoremthe difference is ky ( - ( the above decomposition can be generalized to more than two model matrices suppose that the model matrix can be decomposed into submatricesx [ xd ]where the matrix xi has pi columns and rowsi thusthe number of columns as alwayswe assume the columns are linearly independent
in the full model matrix is +*pd this creates an increasing sequence of "nestedmodel matricesx [ ][ xd ]from (saythe baseline normal model matrix to the full model matrix think of each model matrix corresponding to specific variables in the model we follow similar projection procedure as in figure first project onto span(xto yield the vector (dthen project (donto span([ xd- ]to obtain ( - and so onuntil ( is projected onto span( to yield ( (in the case that by applying pythagorastheoremthe total sum of squares can be decomposed as ky ( ky (dk ky (dy ( - ky ( ( { { { { df= - df= - ( df= df=pd software packages typically report the sums of squares as well as the corresponding degrees of freedom (df) ppd degrees of freedom coefficient of determination to assess how linear model xb compares to the default model ewe can compare the variance of the original dataestimated via (yi ) / ky /np yi ) / kb with the variance of the fitted dataestimated via ( /nwhere xb the sum (yi / ky is sometimes called the total sum of squares (tss)and the quantity kb ( ky is called the coefficient of determination of the linear model in the notation of figure ( and ( so that total sum of squares coefficient of determination ky ( ( ky ( ky ( tss rss tss ky ( ky ( note that lies between and an value close to indicates that large proportion of the variance in the data has been explained by the model many software packages also give the adjusted coefficient of determinationor simply the adjusted defined by adjusted ( - np the regular is always non-decreasing in the number of parameters (see exercise )but this may not indicate better predictive power the adjusted compensates for this increase by decreasing the regular as the number of variables increases this heuristic adjustment can make it easier to compare the quality of two competing models adjusted coefficient of determination
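The coefficient of determination and its adjusted version are straightforward to compute directly from the residual and total sums of squares. The following sketch (on simulated data with illustrative parameter values) does so and checks the results against the values reported by statsmodels:

import numpy as np
import statsmodels.api as sm

np.random.seed(7)
n, p = 50, 3
X = sm.add_constant(np.random.rand(n, p - 1))     # model matrix with intercept column
beta = np.array([1.0, 2.0, -1.5])
y = X @ beta + 0.4 * np.random.randn(n)

fit = sm.OLS(y, X).fit()
yhat = fit.fittedvalues

RSS = np.sum((y - yhat) ** 2)
TSS = np.sum((y - y.mean()) ** 2)
R2 = 1 - RSS / TSS
R2_adj = 1 - (n - 1) / (n - p) * (1 - R2)

print(R2, fit.rsquared)          # should agree
print(R2_adj, fit.rsquared_adj)  # should agree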
inference for normal linear models so far we have not assumed any distribution for the random vector of errors [ en ]in linear model xb when the error terms {ei are assumed to be normally distributed (that is{ei ~iid ( ))whole new avenues open up for inference on linear models in section we already saw that for such normal linear modelsestimation of and can be carried out via maximum likelihood methodsyielding the same estimators from ( and ( the following theorem lists the properties of these estimators in particularit shows /( pare independent and unbiased estimators of and respectively that and theorem properties of the estimators for normal linear model consider the linear model xb ewith ( in )where is pdimensional vector of parameters and dispersion parameter the following results hold are independent the maximum likelihood estimators and (bs (xx) / kh where rank( - proofusing the pseudo-inverse (definition )we can write the random vector as ywhich is linear transformation of normal random vector consequentlyb has multivariate normal distributionsee theorem the mean vector and covariance matrix follow from the same theoremeb xey xx and cov( bxs in ( ) (xx) are independentdefine ( xb to show that and note that / has (uin distributionwith expectation vector xb/ direct application of theorem now shows that ( ( )/ is independent of ( / since xxb xy ( and ky ( /nit follows that is independent of finallyby the same theorem( the random variable ky / has khn- distributionas ( has the same expectation vector as as corollarywe see that each estimator bi of bi has normal distribution with expect ation bi and variance ui ( ui kui where ui [ ]is the -th unit vectorin other wordsthe variance is [(xx)]ii it is of interest to test whether certain regression parameters bi are or notsince if bi the -th explanatory variable has no direct effect on the expected response and so could be removed from the model standard procedure is to conduct hypothesis test (see section for review of hypothesis testingto test the null hypothesis bi
against the alternative bi using the test statistic tb bi /ku> xk rse ( where rse is the residual squared errorthat is rse rss/( pthis test statistic has tn- distribution under to see thiswrite zv/( )with zb bi sku> xk / and thenby theorem ( under kh - and and are independent the result now follows directly from corollary comparing two normal linear models suppose we have the following normal linear model for data [ yn ] + { ( in )( xb where and are unknown vectors of dimension and krespectivelyand and are full-rank model matrices of dimensions and ( )respectively above we implicitly defined [ and [ > > suppose we wish to test the hypothesis against following section the idea is to compare the residual sum of squares for both modelsexpressed as ky ( and ky ( using pythagorastheorem we saw that ky ( ky ( ky ( ( and so it makes sense to base the decision whether to retain or reject on the basis of the quotient of ky ( ( and ky ( this leads to the following test statistics theorem test statistic for comparing two normal linear models for the model ( )let ( and ( be the projections of onto the space spanned by the columns of and the columns of respectively then under the test statistic ky ( ( /( kt( ky ( /( phas an ( kn pdistribution proofdefine : / with expectation :xb/sand : / with expectation kp note that andunder uk we can directly apply theorem to find that ky ( / kx kh - andunder ky ( ( / kx xk kh - moreoverthese random variables are independent of each other the proof is completed by applying theorem
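As a quick numerical check of this result, the following sketch (on simulated data, with illustrative parameter values) computes the test statistic for a full model matrix X = [X₁ X₂] against the reduced matrix X₁, and compares it with the F statistic and p-value reported by statsmodels' compare_f_test:

import numpy as np
import statsmodels.api as sm
from scipy.stats import f

np.random.seed(13)
n, k, p = 80, 2, 4
X1 = sm.add_constant(np.random.rand(n, k - 1))   # reduced model matrix (k columns)
X2 = np.random.rand(n, p - k)                    # extra features
X = np.hstack((X1, X2))                          # full model matrix (p columns)

# simulate under the null hypothesis H0: beta_2 = 0
beta1 = np.array([1.0, 2.0])
y = X1 @ beta1 + 0.5 * np.random.randn(n)

fit_full = sm.OLS(y, X).fit()
fit_red = sm.OLS(y, X1).fit()

rss_full = np.sum(fit_full.resid ** 2)
rss_red = np.sum(fit_red.resid ** 2)
T = ((rss_red - rss_full) / (p - k)) / (rss_full / (n - p))
pval = 1 - f.cdf(T, p - k, n - p)
print(T, pval)

# statsmodels' built-in comparison gives the same F statistic and p-value
print(fit_full.compare_f_test(fit_red)[:2])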
note that is rejected for large values of the testing procedure thus proceeds as follows compute the outcomet sayof the test statistic in ( evaluate the -value ( )with ( kn reject if this -value is too smallsay less than for nested models [ xi ] das in section the test statistic in theorem can now be used to test whether certain xi are needed or not in particularsoftware packages will report the outcomes of fi analysis of variance ky (iy ( - /pi ky (dk /( ( in the order under the null hypothesis that (iand ( - have the same expectation (that isadding xi to xi- has no additional effect on reducing the approximation error)the test statistic fi has an (pi pdistributionand the corresponding -values quantify the strength of the decision to include an additional variable in the model or not this procedure is called analysis of variance (anovanote that the output of an anova table depends on the order in which the variables are considered example (crop yield (cont )we continue examples and decompose the linear model as |{ |{zc |{ |{ { |{zb is the crop yield dependent on treatment levels as well as blockswe first test whether we can remove block as factor in the model against it playing significant role in explaining the crop yields specificallywe test versus using theorem now the vector ( is the projection of onto the ( )-dimensional space spanned by the columns of [ ]and ( is the projection of onto the ( )-dimensional space spanned by the columns of :[ the test statistict sayunder has an ( distribution the python code below calculates the outcome of the test statistic and the corresponding -value we find which gives -value - this shows that the block effects are extremely important for explaining the data using the extended model (including the block effects)we can test whether or notthat iswhether the treatments have significant effect on the crop yield in the presence of the block factor this is done in the last six lines of the code below the outcome of
the test statistic is with -value of by including the block effectswe effectively reduce the uncertainty in the model and are able to more accurately assess the effects of the treatmentsto conclude that the treatment seems to have an effect on the crop yield closer look at the data shows that within each block (rowthe crop yield roughly increases with the treatment level crop py import numpy as np from scipy stats import from numpy linalg import lstsq norm yy np array ([ ]reshape ( , nrow ncol yy shape [ yy shape [ nrow ncol yy reshape ( ,x_ np ones (( , )km np kron(np eye(ncol),np ones (nrow , ))km [, x_ km [, ncolim np eye(nrowc im [, nrowx_ np vstack ((cc)x_ np vstack ((x_ )x_ np vstack ((x_ ) np hstack ((x_ ,x_ ) np hstack (( ,x_ ) shape [ number of parameters in full model betahat lstsq (xyrcond =none)[ estimate under the full model ym betahat x_ np hstack ((x_ x_ )omitting the block effect x_ shape [ number of parameters in reduced model betahat_ lstsq (x_ yrcond =none)[ y_ x_ betahat_ t_ =( - )/( - )*norm( -y_ )** norm( -ym)** )/norm( -ym)** pval_ cdf(t_ , - , -px_ np hstack ((x_ x_ )omitting the treatment effect x_ shape [ number of parameters in reduced model betahat_ lstsq (x_ yrcond =none)[ y_ x_ betahat_ t_ =( - )/( - )*norm( -y_ )** norm( -ym)** )/norm( -ym)** pval_ cdf(t_ , - , -
confidence and prediction intervals as in all supervised learning settingslinear regression is most useful when we wish to predict how new response variable will behave on the basis of new explanatory vector for exampleit may be difficult to measure the response variablebut by knowing the estimated regression line and the value for xwe will have reasonably good idea what or the expected value of is going to be thusconsider new and let (xbs )with and unknown first we are going to look at the expected value of ythat is ey xb since is unknownwe bwhere do not know ey either howeverwe can estimate it via the estimator > (bs ( )by theorem being linear in the components of by therefore has normal distribution with expectation xb and variance kxxk let ( be the standardized version of and ky xb bk / kh - then the random variable : confidence interval prediction interval xbkxxk ( > /( pky xb bk ( ( hasby corollary tn- distribution after rearranging the identity (| tn- ; - / awhere tn- ; - / is the ( / quantile of the tn- distributionwe arrive at the stochastic confidence interval > +tn- ; - / rse kxxk( where we have identified ky xb bk /( pwith rse this confidence interval quantifies the uncertainty in the learner (regression surfacea prediction interval for new response is different from confidence interval for ey here the idea is to construct an interval such that lies in this interval with certain guaranteed probability note that now we have two sources of variation (xbs itself is random variable estimating xb via brings another source of variation we can construct ( aprediction intervalby finding two random bounds such that the random variable lies between these bounds with probability we can reason as follows firstlynote that (xbs and (xbs kxxk are independent it follows that has normal distribution with expectation and variance ( kxxk ( secondlyletting ( be the standardized version of yand repeating the steps used for the construction of the confidence interval ( )we arrive at the prediction interval ( > +tn- ; - / rse kxxk this prediction interval captures the uncertainty from an as-yet-unobserved response as well as the uncertainty in the parameters of the regression model itself
example (confidence limits in simple linear regressionthe following program draws samples from simple linear regression model with parameters [ ]and where the -coordinates are evenly spaced on the interval [ the parameters are estimated in the third block of the code estimates for and are [ ]and respectively the program then proceeds by calculating the numeric confidence and prediction intervals for various values of the explanatory variable figure shows the results confpred py import numpy as np import matplotlib pyplot as plt from scipy stats import from numpy linalg import inv lstsq norm np random seed ( np linspace ( , , reshape ( , parameters beta np array ([ , ]sigma xmat np hstack (np ones (( , )) )design matrix xmat beta sigma *np random randn (nsolve the normal equations betahat lstsq (xmat yrcond =none)[ estimate for sigma sqmse norm( xmat betahat )/np sqrt( - tquant ppf ( , - quantile ucl np zeros (nupper conf limits lcl np zeros (nlower conf limits upl np zeros (nlpl np zeros (nrl np zeros ( (trueregression line for in range ( ) /nxvec np array ([ , ]sqc np sqrt(xvec inv(xmat xmatxvecsqp np sqrt ( xvec inv(xmat xmatxvecrl[ixvec betaucl[ixvec betahat tquant sqmse *sqclcl[ixvec betahat tquant sqmse *sqcupl[ixvec betahat tquant sqmse *sqplpl[ixvec betahat tquant sqmse *sqpplt plot( , 'plt plot( ,rl ,' 'plt plot( ,ucl ,' :'plt plot( ,lcl ,' :'plt plot( ,upl ,' --'plt plot( ,lpl ,' --'
Figure: the true regression line (blue, solid), the upper and lower prediction curves (red, dashed), and the confidence curves (dotted).

Nonlinear Regression Models

So far we have been mostly dealing with linear regression models, in which the prediction function is of the form g(x | β) = xᵀβ. In this section we discuss some strategies for handling general prediction functions g(x | β), where the functional form of g is known up to an unknown parameter vector β. The regression model thus becomes

yᵢ = g(xᵢ | β) + εᵢ, i = 1, …, n,

where ε₁, …, ε_n are independent with expectation 0 and unknown variance σ². The model can be further specified by assuming that the error terms have a normal distribution. The table below gives some common examples of nonlinear prediction functions for data taking values in ℝ.

Table: common nonlinear prediction functions for one-dimensional data.

name          g(x | β)                  β
exponential   a e^{bx}                  a, b
power law     a x^b                     a, b
logistic      (1 + e^{a+bx})⁻¹          a, b
Weibull       1 − exp(−x^b / a)         a, b
polynomial    Σ_{k=0}^{p} b_k x^k       {b_k}

The logistic and polynomial prediction functions in the table can be readily generalized to higher dimensions. For example, for d = 2 a general second-order polynomial prediction function is of the form

g(x | β) = β₀ + β₁x₁ + β₂x₂ + β₃x₁x₂ + β₄x₁² + β₅x₂².
this function can be viewed as second-order approximation to general smooth prediction function ( )see also exercise polynomial regression models are also called response surface models the generalization of the above logistic prediction to rd is ( ( - )- response surface model ( this function will make its appearance in section and later on in and the first strategy for performing regression with nonlinear prediction functions is to extend the feature space to obtain simpler (ideally linearprediction function in the extended feature space we already saw an application of this strategy in example for the polynomial regression modelwhere the original feature was extended to the feature vector [ uu - ]yielding linear prediction function in similar waythe right-hand side of the polynomial prediction function in ( can be viewed as linear function of the extended feature vector ph( [ ]the function ph is called feature map the second strategy is to transform the response variable and possibly also the explanatory variable such that the transformed variables ye are related in simpler (ideally linearway for examplefor the exponential prediction function -bx we have ln ln bxwhich is linear relation between ln and [ ]example (chlorinetable lists the free chlorine concentration (in mg per literin swimming poolrecorded every hours for days simple chemistry-based model for the chlorine concentration as function of time is - where is the initial concentration and is the reaction rate table chlorine concentration (in mg/las function of time (hourshours concentration hours concentration the exponential relationship -bt suggests that log transformation of will result in linear relationship between ln and the feature vector [ ]thusif for some given data ( )(tn yn )we plot ( ln )(tn ln yn )these points should approximately lie on straight lineand hence the simple linear regression model applies the left panel of figure illustrates that the transformed data indeed lie approximately on straight line the estimated regression line is also drawn here the intercept and slope are - and - here the original (non-transformeddata is shown in the right panel of figure along with the fitted curve -bt where exp( and - feature map
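Since the concentration measurements from the table are not reproduced here, the following sketch applies the same log-transform trick to synthetic data from an exponential-decay model (all values are illustrative, not the chlorine data): regress ln y on t via least squares, then transform the intercept and slope back to estimates of a and b.

import numpy as np
import matplotlib.pyplot as plt

np.random.seed(2)
t = np.arange(2, 24, 2.0)                 # measurement times (hours)

# synthetic data from y = a * exp(-b t) with multiplicative noise
a_true, b_true = 2.0, 0.05
y = a_true * np.exp(-b_true * t) * np.exp(0.03 * np.random.randn(t.size))

# log-transform: ln y = ln a - b t is linear in [1, t]
A = np.vstack((np.ones_like(t), t)).T
coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
a_hat, b_hat = np.exp(coef[0]), -coef[1]
print(a_hat, b_hat)

tt = np.linspace(0, 24, 200)
plt.plot(t, y, '.')
plt.plot(tt, a_hat * np.exp(-b_hat * tt))
plt.xlabel('hours'); plt.ylabel('concentration')
plt.show()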
log - - - - figure the chlorine concentration seems to have an exponential decay recall that for general regression problem the learner gt (xfor given training set is obtained by minimizing the training (squared-errorloss ` ( ( ) (yi (xi )) = ( the third strategy for regression with nonlinear prediction functions is to directly minimize ( by any means possibleas illustrated in the next example example (hougen functionin [ the reaction rate of certain chemical reaction is posited to depend on three input variablesquantities of hydrogen -pentane and isopentane the functional relationship is given by the hougen functionyb / where are the unknown parameters the objective is to estimate the model parameters {bi from the dataas given in table table data for the hougen function the estimation is carried out via the least-squares method the objective function to minimize is thus ! xi xi / ` ( ( )yi ( = xi xi xi
where the {yi and {xi are given in table this is highly nonlinear optimization problemfor which standard nonlinear leastsquares methods do not work well insteadone can use global optimization methods such as ce and sco (see sections and using the ce methodwe found the minimal value for the objective functionwhich is attained at [ ] linear models in python in this section we describe how to define and analyze linear models using python and the data science module statsmodels we encourage the reader to regularly refer back to the theory in the preceding sections of this so as to avoid using python merely as black box without understanding the underlying principles to run the code start by importing the following code snippetimport matplotlib pyplot as plt import pandas as pd import statsmodels api as sm from statsmodels formula api import ols modeling although specifying normal linear model in python is relatively easyit requires some subtlety the main thing to realize is that python treats quantitative and qualitative (that iscategoricalexplanatory variables differently in statsmodelsordinary least-squares linear models are specified via the function ols (short for ordinary least-squaresthe main argument of this function is formula of the form xd( where is the name of the response variable and xd are the names of the explanatory variables if all variables are quantitativethis describes the linear model yi xi xi bd xid ei ( where xi is the -th explanatory variable for the -th observation and the errors ei are independent normal random variables such that eei and var ei orin matrix formy xb ewith and yn bd en xn xnd for the rest of this sectionwe assume all linear models to be normal
thusthe first column is always taken as an "interceptparameterunless otherwise specified to remove the intercept termadd - to the ols formulaas in ols(' ~ - 'for any linear modelthe model matrix can be retrieved via the constructionmodel_matrix pd dataframe(model exog,columns=model exog_nameslet us look at some examples of linear models in the first model the variables and are both considered (by pythonto be quantitative mydata pd dataframe ({' [ , , , , , ' [ , , , , , ' [ , , , , , ]}mod ols(" ~ + "datamydata mod_matrix pd dataframe (mod exog columns =mod exog_names print mod_matrix + intercept suppose the second variable is actually qualitativee it represents colorand the levels and stand for redblueand green we can account for such categorical variable by using the astype method to redefine the data type (see section mydata [' 'mydata [' 'astype ('category 'alternativelya categorical variable can be specified in the model formula by wrapping it with (observe how this changes the model matrix mod ols(" ~ + ( )"datamydata mod _matrix pd dataframe (mod exog columns =mod exog_names print mod _matrix intercept ( )[ ( )[ thusif statsmodels formula of the form ( contains factor (qualitativevariablesthe model is no longer of the form ( )but contains indicator variables for each level of the factor variableexcept the first level for the case abovethe corresponding linear model is yi xi {xi {xi ei ( where we have used parameters and to correspond to the indicator features of the qualitative variable the parameter describes how much the response is expected to
change if the factor switches from level to similar interpretation holds for such parameters can thus be viewed as incremental effects it is also possible to model interaction between two variables for two continuous variablesthis simply adds the products of the original features to the model matrix adding interaction terms in python is achieved by replacing "+in the formula with "*"as the following example illustrates interaction mod ols(" ~ * ( )"datamydata mod _matrix pd dataframe (mod exog columns =mod exog_names print mod _matrix intercept ( )[ ( )[ : ( )[ : ( )[ analysis let us consider some easy linear regression models by using the student survey data set survey csv from the book' github sitewhich contains measurements such as heightweightsexetc from survey conducted among university students suppose we wish to investigate the relation between the shoe size (explanatory variableand the height (response variableof person firstwe load the data and draw scatterplot of the points (height versus shoe size)see figure (without the fitted linesurvey pd read_csv ('survey csv 'plt scatter survey shoe survey height plt xlabel ("shoe size"plt ylabel (height "we observe slight increase in the height as the shoe size increasesalthough this relationship is not very distinct we analyze the data through the simple linear regression model yi xi ei in statsmodels this is performed via the ols method as followsmodel ols(height ~shoe"datasurvey define the model fit model fit (#fit the model defined above fit params print (fit params intercept shoe dtype float the above output gives the least-squares estimates of and for this examplewe have and figure which includes the regression linewas obtained as follows
height shoe size figure scatterplot of height (cmagainst shoe size (cm)with the fitted line plt plotsurvey shoe survey shoeplt scatter survey shoe survey height plt xlabel ("shoe size"plt ylabel (height "although ols performs complete analysis of the linear modelnot all its calculations need to be presented summary of the results can be obtained with the method summary print (fit summary ()dep variable height rsquared model ols adj rsquared method least squares fstatistic no observations prob (fstatistic ) - df residuals log likelihood - df model aic covariance typenonrobust bic ====================================================================coef std err >| [ intercept shoe ====================================================================omnibus durbin watson probomnibus ) jarque -bera (jb ) skew- prob(jb ) kurtosis cond no the main output items are the followingcoefestimates of the parameters of the regression line std errorstandard deviations of the estimators of the regression line these are the square roots of the variances of the { bi obtained in (
trealization of student' test statistics associated with the hypotheses bi and bi in particularthe outcome of in ( >| | -value of student' test (two-sided test[ ] confidence intervals for the parameters -squaredcoefficient of determination (percentage of variation explained by the regression)as defined in ( adj -squaredadjusted (explained in section -statisticrealization of the test statistic ( associated with testing the full model against the default model the associated degrees of freedom (df model and df residuals - are givenas is the -valueprob ( -statistic aicthe aic number in ( )that isminus two times the log-likelihood plus two times the number of model parameters (which is here you can access all the numerical values as they are attributes of the fit object first check which names are availableas indir(fitthen access the values via the dot construction for examplethe following extracts the -value for the slope fit pvalues [ - the results show strong evidence for linear relationship between shoe size and height (ormore accuratelystrong evidence that the slope of the regression line is not zero)as the -value for the corresponding test is very small ( - the estimate of the slope indicates that the difference between the average height of students whose shoe size is different by one cm is cm only of the variability of student height is explained by the shoe size we therefore need to add other explanatory variables to the model (multiple linear regressionto increase the model' predictive power analysis of variance (anovawe continue the student survey example of the previous sectionbut now add an extra variableand also consider an analysis of variance of the model instead of "explainingthe student height via their shoe sizewe include weight as an explanatory variable the corresponding ols formula for this model is height~shoe weight
meaning that each random heightdenoted by heightsatisfies height shoe weight ewhere is normally distributed error term with mean and variance thusthe model has parameters before analyzing the model we present scatterplot of all pairs of variablesusing scatter_matrix height model ols(height ~shoeweight "datasurvey fit model fit (axes pd plotting scatter_matrix survey [['height ','shoe ','weight ']]plt show ( shoe weight height weight shoe figure scatterplot of all pairs of variablesheight (cm)shoe (cm)and weight (kgas for the simple linear regression model in the previous sectionwe can analyze the model using the summary method (below we have omitted some output)fit summary (dep variable model method no observations df residuals df model height ols least squares rsquared adj rsquared fstatistic prob (fstatistic ) - log likelihood - aic bic =====================================================================coef std err >| [ intercept
shoe weight the -statistic is used to test whether the full model (here with two explanatory variablesis better at "explainingthe height than the default model the corresponding null hypothesis is the assertion of interest is at least one of the coefficients is significantly different from zero given the result of this test ( -value - )we can conclude that at least one of the explanatory variables is associated with height the individual student tests indicate thatshoe size is linearly associated with student heightafter adjusting for weightwith -value at the same weightan increase of one cm in shoe size corresponds to an increase of cm in average student heightweight is linearly associated with student heightafter adjusting for shoe size (the -value is actually - the reported value of should be read as "less than "at the same shoe sizean increase of one kg in weight corresponds to an increase of cm in average student height further understanding is extracted from the model by conducting an analysis of variance the standard statsmodels function is anova_lm the input to this function is the fit object (obtained from model fit()and the output is dataframe object table sm stats anova_lm (fitprint table shoe weight residual df sum_sq mean_sq nan pr(> - - nan the meaning of the columns is as follows df the degrees of freedom of the variablesaccording to the sum of squares decomposition ( as both shoe and weight are quantitative variablestheir degrees of freedom are both (each corresponding to single column in the overall model matrixthe degrees of freedom for the residuals is sum sqthe sum of squares according to ( the total sum of squares is the sum of all the entries in this column the residual error in the model that cannot be explained by the variables is rss mean sqthe sum of squares divided by their degrees of freedom note that the residual square error rse rss/( is an unbiased estimate of the model variance see section fthese are the outcomes of the test statistic ( pr(> )these are the -values corresponding to the test statistic in the preceding column and are computed using an distribution whose degrees of freedom are given in the df column
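To see how these columns fit together, the F value for each variable is its mean square divided by the residual mean square, and the p-value follows from the F distribution with the degrees of freedom listed in the df column. A minimal sketch, assuming the survey.csv file used earlier in this section is available, reproduces the F and PR(>F) columns directly from the table:

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from scipy.stats import f

survey = pd.read_csv('survey.csv')               # as used earlier in this section
fit = ols("height ~ shoe + weight", data=survey).fit()
table = sm.stats.anova_lm(fit)

# reproduce the F column: mean square of each variable over the residual mean square
ms_resid = table.loc['Residual', 'mean_sq']
df_resid = table.loc['Residual', 'df']
for name in ['shoe', 'weight']:
    F = table.loc[name, 'mean_sq'] / ms_resid
    pval = 1 - f.cdf(F, table.loc[name, 'df'], df_resid)
    print(name, F, pval)                         # matches the F and PR(>F) columns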
the anova table indicates that the shoe variable explains reasonable amount of the variation in the modelas evidenced by sum of squares contribution of out of + and very small -value after shoe is included in the modelit turns out that the weight variable explains even more of the remaining variabilitywith an even smaller -value the remaining sum of squares ( is of the total sum of squaresyielding reductionin accordance with the value reported in the summary for the ols method as mentioned in section the order in which the anova is conducted is important to illustrate thisconsider the output of the following commands model ols(height weight +shoe"datasurvey fit model fit (table sm stats anova_lm (fitprint table weight shoe residual df sum_sq mean_sq nan pr(> - - nan we see that weight as single model variable explains much more of the variability than shoe did if we now also include shoewe only obtain small (but according to the -value still significantreduction in the model variability confidence and prediction intervals in statsmodels method for computing confidence or prediction intervals from dictionary of explanatory variables is get_prediction it simply executes formula ( or ( simpler version is predictwhich only returns the predicted value continuing the student survey examplesuppose we wish to predict the height of person with shoe size cm and weight kg confidence and prediction intervals can be obtained as given in the code below the new explanatory variable is entered as dictionary notice that the prediction interval (for the corresponding random responseis much wider than the confidence interval (for the expectation of the random responsex {'shoe '[ 'weight '[ ]new input dictionary pred fit get_prediction (xpred summary_frame alpha = unstack (mean mean_se mean_ci_lower mean_ci_upper obs_ci_lower obs_ci_upper dtype float predicted value lower upper lower upper bound bound bound bound for for for for ci ci pi pi model validation we can perform an analysis of residuals to examine whether the underlying assumptions of the (normallinear regression model are verified various plots of the residuals can be
used to inspect whether the assumptions on the errors {ei are satisfied figure gives two such plots the first is scatterplot of the residuals {ei against the fitted valuesb yi when the model assumptions are validthe residualsas approximations of the model errorshould behave approximately as iid normal random variables for each of the fitted valueswith constant variance in this case we see no strong aberrant structure in this plot the residuals are fairly evenly spread and symmetrical about the line (not shownthe second plot is quantile-quantile (or qqplot this is useful way to check for normality of the error termsby plotting the sample quantiles of the residuals against the theoretical quantiles of the standard normal distribution under the model assumptionsthe points should lie approximately on straight line for the current case there does not seem to be an extreme departure from normality drawing histogram or density plot of the residuals will also help to verify the normality assumption the following code was used plt plot(fit fittedvalues ,fit resid ,'plt xlabel (fitted values "plt ylabel (residuals "sm qqplot (fit resid sample quantiles residuals fitted values theoretical quantiles figure leftresiduals against fitted values righta qq plot of the residuals neither shows clear evidence against the model assumptions of constant variance and normality variable selection among the large number of possible explanatory variableswe wish to select those which best explain the observed responses by eliminating redundant explanatory variableswe reduce the statistical error without increasing the approximation errorand thus reduce the (expectedgeneralization risk of the learner in this sectionwe briefly present two methods for variable selection they are illustrated on few variables from the data set birthwt discussed in section the data set contains information on the birth weights (massesof babiesas well as various characteristics of the mothersuch as whether she smokesher ageetc we wish to explain the child' weight at birth using various characteristics of the motherher family historyand her behavior during pregnancy the response variable is weight at birth (quantitative variable bwtexpressed in grams)the explanatory variables are given below
the data can be obtained as explained in section or from statsmodels in the following waybwt sm datasets get_rdataset (birthwt ","mass"data here is some information about the explanatory variables that we will investigate agemother' age in years lwtmother' weight in lbs racemother' race ( white black othersmokesmoking status during pregnancy ( no yesptlno of previous premature labors hthistory of hypertension ( no yesuipresence of uterine irritability ( no yesftvno of physician visits during first trimester bwtbirth weight in grams we can see the structure of the variables via bwt info(check yourself that all variables are defined as quantitative (int howeverthe variables racesmokehtand ui should really be interpreted as qualitative (factorsto fix thiswe could redefine them with the method astypesimilar to what we did in alternativelywe could use the (construction in statsmodels formula to let the program know that certain variables are factors we will use the latter approach for binary features it does not matter whether the variables are interpreted as factorial or numerical as the numerical and summary results are identical we consider the explanatory variables lwtageuismokehtand two recoded binary variables ftv and ptl we define ftv if there was at least one visit to physicianand ftv otherwise similarlywe define ptl if there is at least one preterm birth in the family historyand ptl otherwise ftv (bwt['ftv '>= astype (intptl (bwt['ptl '>= astype (int forward selection forward selection and backward elimination the forward selection method is an iterative method for variable selection in the first iteration we consider which feature is the most significant in terms of its -value in the models bwt~ with {lwtagethis feature is then selected into the model in the second iterationthe feature that has the smallest -value in the models bwt~ + is selectedwhere and so on usually only features are selected that have pvalue of at most the following python program automates this procedure instead of selecting on the -value one could select on the aic or bic value
forwardselection py import statsmodels api as sm from statsmodels formula api import ols bwt sm datasets get_rdataset (birthwt ","mass"data ftv (bwt['ftv '>= astype (intptl (bwt['ptl '>= astype (intremaining_features {'lwt ''age '' (ui)''smoke '' (ht)''ftv ''ptl 'selected_features [while remaining_features pf [#list of ( value feature for in remaining_features temp selected_features [ftemporary list of features formula 'bwt~'+join(tempfit ols(formula ,data=bwtfit (pvalfit pvalues - if pval pf append (pval , )if pf#if not empty pf sortreverse =true(best_pval best_f pf pop (remaining_features remove best_f print ('feature {with pvalue { }format (best_f best_pval )selected_features append best_f elsebreak feature feature feature feature (uiwith pvalue - (htwith pvalue - lwt with pvalue - smoke with pvalue - in backward elimination we start with the complete model (all features includedand at each stepwe remove the variable with the highest -valueas long as it is not significant (greater than we leave it as an exercise to verify that the order in which the features are removed isageftv and ptl in this caseforward selection and backward elimination result in the same modelbut this need not be the case in general this way of model selection has the advantage of being easy to use and of treating the question of variable selection in systematic manner the main drawback is that variables are included or deleted based on purely statistical criteriawithout taking into account the aim of the study this usually leads to model which may be satisfactory from statistical point of viewbut in which the variables are not necessarily the most relevant when it comes to understanding and interpreting the data in the study of coursewe can choose to investigate any combination of featuresnot just the ones suggested by the above variable selection methods for examplelet us see if the mother' weighther ageher raceand whether she smokes explain the baby' birthweight backward elimination
formula 'bwt~lwt+age+ (race)smoke bwt_model ols(formula data=bwtfit (print bwt_model summary ()ols regression results =====================================================================dep variable bwt rsquared model ols adj rsquared method least squares fstatistic no observations prob (fstatistic ) - df residuals log likelihood - df model aic bic ====================================================================coef std err >| [ intercept (race )[ - - - - (race )[ - - - - smoke - - - - lwt age - - - =====================================================================omnibus durbin watson probomnibus ) jarque -bera (jb ) skew- prob(jb ) kurtosis cond no given the result of fisher' global test given by prob ( -statisticin the summary ( -value - )we can conclude that at least one of the explanatory variables is associated with child weight at birthafter adjusting for the other variables the individual student tests indicate thatthe mother' weight is linearly associated with child weightafter adjusting for ageraceand smoking status ( -value at the same ageraceand smoking statusan increase of one pound in the mother' weight corresponds to an increase of in the average child weight at birththe age of the mother is not significantly linearly associated with child weight at birthwhen mother weightraceand smoking status are already taken into account ( -value )weight at birth is significantly lower for child born to mother who smokescompared to children born to non-smoking mothers of the same ageraceand weightwith -value of (to see thisinspect bwt_model pvaluesat the same ageraceand mother weightthe child' weight at birth is less for smoking mother than for non-smoking motherregarding the interpretation of the variable racewe note that the first level of this categorical variable corresponds to white mothers the estimate of - for (race)[ represents the difference in the child' birth weight between black mothers and white mothers (reference group)and this result is significantly different
from zero ( -value in model adjusted for the mother' weightageand smoking status interaction we can also include interaction terms in the model let us see whether there is any interaction effect between smoke and age via the model bwt age smoke age smoke in python this can be done as follows (below we have removed some output)formula 'bwt~agesmoke bwt_model ols(formula data=bwtfit (print bwt_model summary ()ols regression results =====================================================================dep variable bwt rsquared model ols adj rsquared method least squares fstatistic no observations prob (fstatistic ) df residuals log likelihood - df model aic bic =====================================================================coef std err >| [ intercept smoke - age agesmoke - - - - we observe that the estimate for (- is significantly different from zero ( -value we therefore conclude that the effect of the mother' age on the child' weight depends on the smoking status of the mother the results on association between mother age and child weight must therefore be presented separately for the smoking and the nonsmoking group for non-smoking mothers (smoke )the mean child weight at birth increases on average by grams for each year of the mother' age this is statistically significantas can be seen from the confidence intervals for the parameters (which does not contain zero)bwt_model conf_int (intercept age smoke agesmoke - - - similarlyfor smoking mothersthere seems to be decrease in birthweightb - but this is not statistically significantsee exercise
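To make the interpretation of the interaction model concrete, the following is a minimal sketch (not part of the original analysis) that refits the model bwt ~ age * smoke and reads off the age slope separately for non-smoking and smoking mothers from the estimated coefficients. It assumes the birthwt data are loaded as shown earlier; the label 'age:smoke' is statsmodels' default name for the interaction coefficient.

import statsmodels.api as sm
from statsmodels.formula.api import ols

# Load the birthwt data as in the earlier examples.
bwt = sm.datasets.get_rdataset("birthwt", "MASS").data

# Interaction model: bwt = beta0 + beta1*age + beta2*smoke + beta3*age*smoke + error.
fit = ols('bwt ~ age * smoke', data=bwt).fit()
b = fit.params

# Age slope for non-smoking mothers (smoke = 0) is beta1;
# for smoking mothers (smoke = 1) it is beta1 + beta3.
slope_nonsmoking = b['age']
slope_smoking = b['age'] + b['age:smoke']
print(slope_nonsmoking, slope_smoking)

A confidence interval for the smoking-group slope is the subject of the exercise referred to above.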
generalized linear models the normal linear model in section deals with continuous response variables -such as height and crop yield -and continuous or discrete explanatory variables given the feature vectors {xi }the responses {yi are independent of each otherand each has normal distribution with mean > bwhere > is the -th row of the model matrix generalized linear models allow for arbitrary response distributionsincluding discrete ones definition generalized linear model generalized linear model in generalized linear model (glmthe expected response for given feature vector [ ]is of the form [ xh(xbactivation function link function ( for some function hwhich is called the activation function the distribution of (for given xmay depend on additional dispersion parameters that model the randomness in the data that is not explained by the inverse of function is called the link function as for the linear model( is model for single pair (xyusing the model simplification introduced at the end of section the corresponding model for whole training set {(xi yi )is that the {xi are fixed and that the {yi are independenteach yi satisfying ( with xi writing [ yn ]and defining as the multivalued function with components hwe have ex (xb)where is the (modelmatrix with rows > > common assumption is that yn come from the same family of distributionse normalbernoullior poisson the central focus is the parameter vector bwhich summarizes how the matrix of explanatory variables affects the response vector the class of generalized linear models can encompass wide variety of models obviously the normal linear model ( is generalized linear modelwith [ xxbso that is the identity function in this casey (xbs ) nwhere is dispersion parameter logistic regression logistic distribution example (logistic regressionin logistic regression or logit modelwe assume that the response variables yn are independent and distributed according to yi ber( ( > ))where here is defined as the cdf of the logistic distribution - large values of > thus lead to high probability that yi and small (negativevalues of > cause yi to be with high probability estimation of the parameter vector from the observed data is not as straightforward as for the ordinary linear modelbut can be accomplished via the minimization of suitable training lossas explained below as the {yi are independentthe pdf of [ yn ]is ( bx[ ( > )]yi [ ( > )] -yi (xi=
maximizing the log-likelihood ln ( bxwith respect to gives the maximum likelihood estimator of in supervised learning frameworkthis is equivalent to minimizingn ln (yi bxi ln ( bxn = =yi ln ( > ( yi ln( ( > ) = ( by comparing ( with ( )we see that we can interpret ( as the cross-entropy training loss associated with comparing true conditional pdf ( xwith an approximation pdf ( bxvia the loss function lossf ( ) ( bx):ln ( bx- ln (xb( yln( (xb)minimizing ( in terms of actually constitutes convex optimization problem since ln (xbln( - and ln( (xb)-xb ln( - )the cross-entropy training loss ( can be rewritten as  xh ( yi ) > ln -xi rt ( : = we leave it as exercise to show that the gradient rt (band hessian (bof rt (bare given by rt ( (ui yi xi ( = and (bui ( ui xi > = ( respectivelywhere ui : ( > bnotice that (bis positive semidefinite matrix for all values of bimplying the convexity of rt (bconsequentlywe can find an optimal efficientlye via newton' method specificallygiven an initial value for iteratively compute bt bt- - (bt- rt (bt- )( until the sequence is deemed to have convergedusing some pre-fixed convergence criterion figure shows the outcomes of independent bernoulli random variableswhere each success probability( exp(-( )))- depends on and - the true logistic curve is also shown (dashed linethe minimum training loss curve (red lineis obtained via the newton scheme ( )giving estimates bb - and bb the python code is given below
- - figure logistic regression data (blue dots)fitted curve (red)and true curve (black dashedlogreg py import numpy as np import matplotlib pyplot as plt from numpy linalg import lstsq ( np random rand( - reshape ( , beta np array ([- ]xmat np hstack (np ones (( , )) ) /( np exp(-xmat beta) np random binomial ( , ,nsample size explanatory variables response variables initial guess betat lstsq (xmat xmat),xmat yrcond =none)[ grad np array ([ , ]gradient while (np sum(np abs(grad) - stopping criteria mu /( np exp(-xmat betat )gradient delta (mu yreshape ( , grad np sum(np multiply np hstack (delta delta )),xmat)axis = hessian xmat np diag(np multiply (mu ,( mu))xmat betat betat lstsq ( ,grad rcond =none)[ print betat plt plot( , 'plot data xx np linspace - , , reshape ( , xxmat np hstack (np ones (len(xx, ))xx)yy /( np exp(xxmat beta)plt plot(xx ,yy ,' -'#true logistic curve yy /( np exp(xxmat betat ))plt plot(xx ,yy ,' --'
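As a cross-check on the Newton iterations above, the same logit model can be fitted with statsmodels, which maximizes the same Bernoulli log-likelihood. The sketch below is only illustrative: the sample size, seed, and true coefficients are arbitrary choices, not necessarily those used to produce the figure.

import numpy as np
import statsmodels.api as sm

np.random.seed(123)
n = 100                                    # illustrative sample size
beta_true = np.array([-3.0, 10.0])         # illustrative true coefficients

x = 2 * np.random.rand(n, 1) - 1           # explanatory variable
X = np.hstack((np.ones((n, 1)), x))        # model matrix with intercept
p = 1 / (1 + np.exp(-X @ beta_true))       # success probabilities
y = np.random.binomial(1, p)               # Bernoulli responses

# Maximum-likelihood fit of the logistic regression model; the estimates
# should agree with those produced by the Newton scheme in logreg.py.
fit = sm.Logit(y, X).fit(disp=0)
print(fit.params)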
further reading an excellent overview of regression is provided in [ and an accessible mathematical treatment of linear regression models can be found in [ for extensions to nonlinear regression we refer the reader to [ practical introduction to multilevel/hierarchical models is given in [ for further discussion on regression with discrete responses (classificationwe refer to and the further reading therein on the important question of how to handle missing datathe classic reference is [ (see also [ ]and modern applied reference is [ exercises following his mentor francis galtonthe mathematician/statistician karl pearson conducted comprehensive studies comparing hereditary traits between members of the same family figure depicts the measurements of the heights of fathers and their adult sons (one son per fatherthe data is available from the book' github site as pearson csv height son (in height father (infigure scatterplot of heights from pearson' data (ashow that sons are on average inch taller than the fathers (bwe could try to "explainthe height of the son by taking the height of his father and adding inch the prediction line (red dashedis given figure the black solid line is the fitted regression line this line has slope less than and demonstrates galton' "regressionto the average find the intercept and slope of the fitted regression line for the simple linear regression modelshow that the values for and that solve the
equations ( arepn = (xi )(yi ypn = (xi ( xb ( provided that not all xi are the same edwin hubble discovered that the universe is expanding if is galaxy' recession velocity (relative to any other galaxyand is its distance (from that same galaxy)hubble' law states that hdwhere is known as hubble' constant the following are distance (in millions of lightyearsand velocity (thousands of miles per secondmeasurements made on five galactic clusters distance velocity state the regression model and estimate the multiple linear regression model ( can be viewed as first-order approximation of the general model (xe( where var and (xis some known or unknown function of ddimensional vector of explanatory variables to see thisreplace (xwith its first-order taylor approximation around some point and write this as xb express and in terms of and table shows data from an agricultural experiment where crop yield was measured for two levels of pesticide and three levels of fertilizer there are three responses for each combination table crop yields for pesticide and fertilizer combinations fertilizer pesticide low medium high no yes (aorganize the data in standard formwhere each row corresponds to single measurement and the columns correspond to the response variable and the two factor variables
(blet yi jk be the response for the -th replication at level for factor and level for factor to assess which factors best explain the response variablewe use the anova model yi jk ai gi ei jk ( where ai gi gi define [ua ]give the corresponding model matrix (cnote that the parameters are linearly dependent in this case for examplea - and -( to retain only linearly independent variables consider the -dimensional parameter vector [ua ]find the matrix such that me (dgive the model matrix corresponding to show that for the birthweight data in section there is no significant decrease in birthweight for smoking mothers [hintcreate new variable nonsmoke -smokewhich reverses the encoding for the smoking and non-smoking mothers thenthe parameter in the original model is the same as the parameter in the model bwt age nonsmoke age nonsmoke now find for and see if it contains zero prove ( and ( in the tobit regression model with normally distributed errorsthe response is modeled aszi if ui zi yi (xbs in )ui if zi ui where the model matrix and the thresholds un are given typicallyui suppose we wish to estimate th :(bs via the expectation-maximization methodsimilar to the censored data example let [ yn ]be the vector of observed data (ashow that the likelihood of isy ( thphs (yi > bx ph((ui > )/ ) :yi >ui :yi =ui where ph is the cdf of the ( distribution and phs the pdf of the ( distribution (blet and be vectors that collect all yi ui and yi ui respectively denote the corresponding matrix of predictors by and xrespectively for each observation yi ui introduce latent variable zi and collect these into vector for the same indices collect the corresponding ui into vector show that the complete-data likelihood is given by ky xbk kz xbk { cg(yz thexp ( ps ) / tobit regression
(cfor the -stepshow thatfor fixed thg( ythy (zi yth)where each (zi ythis the pdf of the ((xb) distributiontruncated to the interval (-ci (dfor the -stepcompute the expectation of the complete log-likelihood ky xbk ekz xbk ln ln( thenderive the formulas for and that maximize the expectation of the complete log-likelihood dowload data set womenwage csv from the book' website this data set is tidied-up version of the women' wages data set from [ the first column of the data (hoursis the response variable it shows the hours spent in the labor force by married women in the we want to understand what factors determine the participation rate of women in the labor force the predictor variables arefeature kidslt kidsge age educ exper nwifeinc expersq table features for the women' wage data set description number of children younger than years number of children older than years age of the married woman number of years of formal education number of years of "work experiencenon-wife incomethat isthe income of the husband the square of experto capture any nonlinear relationships we observe that some of the responses are that issome women did not participate in the labor force for this reasonwe model the data using the tobit regression modelin which the response is given aszi if zi yi (xbs in if zi with th (bs )the likelihood of the data [ yn ]isg( thq :yi > phs (yi xi bx :yi = ph((ui xi )/ )where ph is the standard normal cdf in exercise we derived the em algorithm for maximizing the log-likelihood (awrite down the em algorithm in pseudo code as it applies to this tobit regression
(bimplement the em algorithm pseudo code in python comment on which factor you think is important in determining the labor participation rate of women living in the usa in the let be projection matrix show that the diagonal elements of all lie in the interval [ in particularfor xxin theorem the leverage value pi :pii satisfies pi for all consider the linear model xb in ( )with being the model matrix and having expectation vector and covariance matrix in suppose that - is the least-squares estimate obtained by omitting the -th observationyi that isx - argmin ( > ) , - be the corresponding fitted value at xi alsoy- > where > is the -th row of let define bi as the least-squares estimator of based on the response data ( :[ yi- - yi+ yn ](aprove that - bi that isthe linear model obtained from fitting all responses except the -th is the same as the one obtained from fitting the data ( (buse the previous result to verify that yi - (yi yi )/( pii )where xxis the projection matrix onto the columns of hencededuce the press formula in theorem take the linear model xb ,where is an model matrixe and cov(es in let xxbe the projection matrix onto the columns of (ausing the properties of the pseudo-inverse (see definition )show that ppp (blet be the (randomvector of residualswhere py show that the -th residual has normal distribution with expectation and variance ( pii (that iss times minus the -th leverage(cshow that can be unbiasedly estimated via : ky yk ky xb bk np np ( [hintuse the cyclic property of the trace as in example consider normal linear model xb ewhere is an model matrix and ( in exercise shows that for any such model the -th standardized residual ei /( pii has standard normal distribution this motivates the use of the leverage pii to assess whether the -th observation is an outlier depending on the size of the -th residual relative to pii more robust approach is to include an estimate for using
studentized residual all data except the -th observation this gives rise to the studentized residual defined as ei : - pii where - is an estimate of obtained by fitting all the observations except the -th and ei yi yi is the -th (randomresidual exercise shows that we can takefor example ky - -ib - ( - where - is the model matrix with the -th row removedis an unbiased estimator of efficientlyusing in ( )as the latter will typically be we wish to compute - available once we have fitted the linear model to this enddefine ui as the -th unit vector [ ]and let - ( : (yi - )ui ei ui pii where we have used the fact that yi - ei /( pii )as derived in the proof of theorem now apply exercise to prove that - cook' distance ( ps ei /( pii np- using the notation from exercises - cook' distance for observation is defined as (ikb - di :ps it measures the change in the fitted values when the -th observation is removedrelative to the residual variance of the model (estimated via by using similar arguments as those in exercise show that di pii ei ( pii ) it follows that there is no need to "omit and refitthe linear model in order to compute cook' distance for the -th response prove that if we add an additional feature to the general linear modelthen the coefficient of determinationis necessarily non-decreasing in value and hence cannot be used to compare models with different numbers of predictors let :[ xn ]and :[ un ]in the fundamental theorem we use the fact that if xi (ui ) are independentthen kxk has (per definitiona noncentral kh distribution show that kxk has moment generating function etkuk /( - ( ) / / and so the distribution of kxk depends on only through the norm kuk
carry out logistic regression analysis on (partialwine data set classification problem the data can be loaded using the following code from sklearn import datasets import numpy as np data datasets load_wine ( data data [:[ , ] np array (data target == dtype =np uintx np append (np ones(len( )reshape - , , ,axis = the model matrix has three featuresincluding the constant feature instead of using newton' method ( to estimate bimplement simple gradient descent procedure bt bt- art (bt- )with learning rate and run it for steps your procedure should deliver three coefficientsone for the intercept and the rest for the explanatory variables solve the same problem using the logit method of statsmodels api and compare the results consider again example where we train the learner via the newton iteration ( if :[ xn defines the matrix of predictors and ut : (xbt )then the gradient ( and hessian ( for newton' method can be written as rt (bt (ut yn and (bt xdt xn where dt :diag(ut ( ut )is diagonal matrix show that the newton iteration ( can be written as the iterative reweighted least-squares methodbt argmin ( yt- xb)dt- ( yt- xb)iterative reweighted least squares where yt- :xbt- - - ( ut- is the so-called adjusted response [hintuse the fact that (mm)- mz is the minimizer of kmb zk in multi-output linear regressionthe response variable is real-valued vector of dimensionsaym similar to ( )the model can be written in matrix notatione xb en wherey is an matrix of independent responses (stored as row vectors of length ) is the usual model matrixb is an matrix of model parameterse en rm are independent error terms with and ees multi-output linear regression
we wish to learn the matrix parameters and from the training set {yxto this endconsider minimizing the training loss tr ( xbs- ( xb) where tr(*is the trace of matrix (ashow that the minimizer of the training lossdenoted bsatisfies the normal equationsxx xy (bnoting that ( xb( xbn ei > = explain why ( xb )( xb bb : is method-of-moments estimator of sjust like the one given in (
egularization and ernel ethods the purpose of this is to familiarize the reader with two central concepts in modern data science and machine learningregularization and kernel methods regularization provides natural way to guard against overfitting and kernel methods offer broad generalization of linear models herewe discuss regularized regression (ridgelassoas bridge to the fundamentals of kernel methods we introduce reproducing kernel hilbert spaces and show that selecting the best prediction function in such spaces is in fact finite-dimensional optimization problem applications to spline fittinggaussian process regressionand kernel pca are given introduction in this we return to the supervised learning setting of (regressionand expand its scope given training data {( )(xn yn )}we wish to find prediction function (the learnergt that minimizes the (squared-errortraining loss (yi (xi )) ` (gn = within class of functions as noted in if is the set of all possible functions then choosing any function with the property that (xi yi for all will give zero training lossbut will likely have poor generalization performance (that issuffer from overfittingrecall from theorem that the best possible prediction function (over all gfor the squared-error risk ( ( )) is given by (xe[ xthe class should be simple enough to permit theoretical understanding and analysis butat the same timerich enough to contain the optimal function (or function close to gthis ideal can be realized by taking to be hilbert space ( complete inner product spaceof functionssee appendix many of the classes of functions that we have encountered so far are in fact hilbert spaces in particularthe set of linear functions on is hilbert space to see this hilbert space
complete vector space feature maps rkhs regularization identify with each element the linear function gb xb and define the inner product on as hgb gg :bg in this wayg behaves in exactly the same way as (is isomorphic tothe space equipped with the euclidean inner product (dot productthe latter is hilbert spacebecause it is complete with respect to the euclidean norm see exercise for further discussion let us now turn to our "runningpolynomial regression example where the feature vector [ uu - ]=ph(uis itself vector-valued function of another feature thenthe space of functions hb ph( ) is hilbert spacethrough the identification hb in factthis is true for any feature mapping ph [ph ( )ph ( )]this can be further generalized by considering feature maps ku where each ku is real-valued function ku (von the feature space as we shall soon see (in secp tion )functions of the form = bi kvi (ulive in hilbert space of functions called reproducing kernel hilbert space (rkhsin section we introduce the notion of rkhs formallygive specific examplesincluding the linear and gaussian kernelsand derive various useful propertiesthe most important of which is the representer theorem applications of such spaces include the smoothing splines (section )gaussian process regression (section )kernel pca (section )and support vector machines for classification (section the rkhs formalism also makes it easier to treat the important topic of regularization the aim of regularization is to improve the predictive performance of the best learner in some class of functions by adding penalty term to the training loss that penalizes learners that tend to overfit the data in the next section we introduce the main ideas behind regularizationwhich then segues into discussion of kernel methods in the subsequent sections regularization let be the hilbert space of functions over which we search for the minimizergt of the training loss ` (goftenthe hilbert space is rich enough so that we can find learner gt within such that the training loss is zero or close to zero consequentlyif the space of functions is sufficiently richwe run the risk of overfitting one way to avoid overfitting is to restrict attention to subset of the space by introducing non-negative functional rwhich penalizes complex models (functionsin particularwe want to find functions such that ( thus we can formulate the quintessential supervised learning problem asmin {` (gg (gc( the solution (argminof which is our learner when this optimization problem is convexit can be solved by first obtaining the lagrangian dual function ( :min {` (gl( (gc)gg ridge regression and then maximizing (lwith respect to see section in order to introduce the overall ideas of kernel methods and regularizationwe will proceed by exploring ( in the special case of ridge regressionwith the following running example
example (ridge regressionridge regression is simply linear regression with squared-norm penalty functional (also called regularization functionor regularizersuppose we have training set {(xi yi ) }with each xi and we use squared-norm penalty with regularization parameter thenthe problem is to solve (yi (xi )) kgk min gg = regularizer regularization parameter ( where is the hilbert space of linear functions on as explained in section we can identify each with vector andconsequentlykgk hbbi kbk the above functional optimization problem is thus equivalent to the parametric optimization problem  yi > kbk ( minp br = whichin the notation of further simplifies to xb kbk br minp ( in other wordsthe solution to ( is of the form xbwhere bsolves ( (or equivalently ( )observe that as the regularization term becomes dominant and consequently the optimal becomes identically zero the optimization problem in ( is convexand by multiplying by the constant / and setting the gradient equal to zerowe obtain (xb yn ( if these are simply the normal equationsalbeit written in slightly different form if the matrix xx gi is invertible (which is the case for any see exercise )then the solution to these modified normal equations is (xx gi )- xy when using regularization with respect to some hilbert space git is sometimes useful to decompose into two orthogonal subspacesh and saysuch that every can be uniquely written as cwith hc cand hhci such is said to be the direct sum of and hand we write decompositions of this form become useful when functions in are penalized but functions in are not we illustrate this decomposition with the ridge regression example where one of the features is constant termwhich we do not wish to penalize example (ridge regression (cont )suppose one of the features in example is the constant which we do not wish to penalize the reason for this is to ensure that when the optimal becomes the "constantmodelg(xb rather than the "zeromodelg( let us alter the notation slightly by considering the feature vectors to be of the form [ ]where [ ]we thus have featuresrather direct sum
than let be the space of linear functions of each linear function of can be written as xbwhich is the sum of the constant function and : xb moreoverthe two functions are orthogonal with respect to the inner product on hchi [ ][ ] where is column vector of zeros as subspaces of gboth and are again hilbert spacesand their inner products and norms follow directly from the inner product on for exampleeach function in has norm khkh kbkand the constant function in has norm | the modification of the regularized optimization problem ( where the constant term is not penalized can now be written as min (yi ( xi )) kgk ghc = ( xb kbk , ( which further simplifies to min where is the vector of observe thatin this caseas the optimal tends to the sample mean of the {yi }that iswe obtain the "defaultregression modelwithout explanatory variables againthis is convex optimization problemand the solution follows from ( xb yn ( with ( xb( (xx - ) (xn- ) ( this results in solving for from and determining from ( as precursor to the kernel methods in the following sectionslet us assume that and that has full (columnrank then any vector can be written as linear combination of the feature vectors {xi }that isas linear combinations of the columns of the matrix xin particularlet xawhere [ an ]rn in this case ( reduces to (xxn- xxn in ) (in - ) assuming invertibility of (xxn- xxn in )we have the solution (xxn- xxn in )- (in - )ygram matrix which depends on the training feature vectors {xi only through the matrix of inner productsxx[hxi ithis matrix is called the gram matrix of the {xi from ( )the solution for the constant term is bb - ( xx> ait follows that the learner is linear combination of inner products {hxi xiplus constantgt ( xb xx> = = ai hxi xi
where the coefficients only depend on the feature vectors through their inner products $\{\langle x_i, x_j \rangle\}$. We will see shortly that the representer theorem generalizes this result to a broad class of regularized optimization problems.

We illustrate in the figure below how the solutions of the ridge regression problems appearing in the two examples above are qualitatively affected by the regularization parameter $\gamma$. For a simple linear regression model, the data were generated from a model of the form $y_i = \beta_0 + \beta_1 x_i + \varepsilon_i$, where each $x_i$ is drawn independently and uniformly from an interval and each $\varepsilon_i$ is drawn independently from the standard normal distribution.

Figure: Ridge regression solutions for a simple linear regression problem. Each panel shows contours of the loss function (log scale) and the effect of the regularization parameter $\gamma$ appearing in the two penalized problems above. Top row: both terms are penalized; bottom row: only the non-constant term is penalized. Penalized (plus) and unpenalized (diamond) solutions are shown in each case.

The contours are those of the squared-error loss (actually the logarithm thereof), which is minimized with respect to the model parameters $\beta_0$ and $\beta_1$. The diamonds all represent the same minimizer of this loss. The plusses show each minimizer $[\beta_0^*, \beta_1^*]^\top$ of the regularized minimization problems for three choices of the regularization parameter. For the top three panels the regularization involves both $\beta_0$ and $\beta_1$ through the squared norm. The circles show the points that have the same squared norm as
the optimal solution for the bottom three panels only is regularizedtherehorizontal lines indicate vectors [ ]for which | | * lasso the problem of ridge regression discussed in example boils down to solving problem of the form in ( )involving squared -norm penalty kbk natural question to ask is whether we can replace the squared -norm penalty by different penalty term replacing it with -norm gives the lasso (least absolute shrinkage and selection operatorthe lasso equivalent of the ridge regression problem ( is thus xb kbk , min where kbk ( pp = |bi this is again convex optimization problem unlike ridge regressionthe lasso generally does not have an explicit solutionand so numerical methods must be used to solve it note that the problem ( is of the form min , (xg(zsubject to ax bz ( with :[ ] :ba :[ ] :- and : (vector of zeros)and convex functions ( : [ xx and ( :gkzk there exist efficient algorithms for solving such problemsincluding the alternating direction method of multipliers (admm[ we refer to example ?for details on this algorithm we repeat the examples from figure but now using lasso regression and taking the square roots of the previous regularization parameters the results are displayed in figure
Figure: Lasso regression solutions; compare with the ridge regression figure above.

One advantage of using the lasso regularization is that the resulting optimal parameter vector often has several components that are exactly zero. For example, in the top middle and right panels of the figure the optimal solution lies exactly at a corner point of the square $\{[\beta_0, \beta_1]^\top : |\beta_0| + |\beta_1| \leq |\beta_0^*| + |\beta_1^*|\}$; in this case one of the components of the solution is exactly zero. For statistical models with many parameters, the lasso can provide a methodology for model selection. Namely, as the regularization parameter increases (or, equivalently, as the norm of the optimal solution decreases), the solution vector will have fewer and fewer non-zero parameters. By plotting the values of the parameters for each $\gamma$ or norm, one obtains the so-called regularization paths (also called homotopy paths or coefficient profiles) for the variables. Inspection of such paths may help assess which of the model parameters are relevant to explain the variability in the observed responses $\{y_i\}$.

Example (Regularization Paths). The figure below shows the regularization paths for the coefficients of a multiple linear regression model $y_i = \sum_j x_{ij}\beta_j + \varepsilon_i$, in which the first group of coefficients $\beta_j$ takes a fixed non-zero value and the remaining coefficients are zero. The error terms $\{\varepsilon_i\}$ are independent and standard normal, and the explanatory variables $\{x_{ij}\}$ were independently generated from a standard normal distribution. As is clear from the figure, the estimates of the
non-zero coefficients are first selectedas the norm of the solutions increases by the time the norm reaches around all variables for which have been correctly identified and the remaining parameters are estimated as exactly only after the norm reaches around will these "spuriousparameters be estimated to be non-zero for this examplethe regularization parameter varied from - to - - norm figure regularization paths for lasso regression solutions as function of the norm of the solutions reproducing kernel hilbert spaces in this sectionwe formalize the idea outlined at the end of section of extending finite dimensional feature maps to those that are functions by introducing special type of hilbert space of functions known as reproducing kernel hilbert space (rkhsalthough the theory extends naturally to hilbert spaces of complex-valued functionswe restrict attention to hilbert spaces of real-valued functions here to evaluate the loss of learner in some class of functions gwe do not need to explicitly construct -ratherit is only required that we can evaluate at all the feature vectors xn of the training set defining property of an rkhs is that function evaluation at point can be performed by simply taking the inner product of with some feature function associated with we will see that this property becomes particularly useful in light of the representer theorem (see section )which states that the learner itself can be represented as linear combination of the set of feature functions { xi nconsequentlywe can evaluate learner at the feature vectors {xi by taking linear combinations of terms of the form (xi hk xi ig collecting these inner products into matrix [ (xi )ij (the gram matrix of the { xi })we will see that the feature vectors {xi only enter the loss minimization problem through
definition reproducing kernel hilbert space for non-empty set xa hilbert space of functions with inner product **ig is called reproducing kernel hilbert space (rkhswith reproducing kernel ifreproducing kernel hilbert space for every xk : ( *is in (xxfor all for every and gg(xhgk ig the reproducing kernel of hilbert space of functionsif it existsis uniquesee exercise the main (thirdcondition in definition is known as the reproducing property this property allows us to evaluate any function at point by taking the inner product of and as suchk is called the representer of evaluation furtherby taking and applying the reproducing propertywe have hk ig ( )and so by symmetry of the inner product it follows that (xx ( xas consequencereproducing kernels are necessarily symmetric functions moreovera reproducing kernel is positive semidefinite functionmeaning that for every and every choice of an and xn xit holds that ai (xi ( reproducing property positive semidefinite = = in other wordsevery gram matrix associated with is positive semidefinite matrixthat is aka for all the proof is addressed in exercise the following theorem gives an alternative characterization of an rkhs the proof uses the riesz representation theorem also note that in the theorem below we could have replaced the word "boundedwith "continuous"as the two are equivalent for linear functionalssee theorem theorem continuous evaluation functionals characterize rkhs an rkhs on set is hilbert space in which every evaluation functional (xis bounded converselya hilbert space of functions for which every evaluation functional is bounded is an rkhs proofnote thatsince evaluation functionals are linear operatorsshowing boundedness is equivalent to showing continuity given an rkhs with reproducing kernel ksuppose that we have sequence gn converging to gthat is kgn gkg we apply the cauchy-schwarz inequality (theorem and the reproducing property of to find that for every and any np | gn |gn (xg( )|hgn gk ig kgn gkg kk kg kgn gkg hk ig kgn gkg (xxnoting that (xxby definition for every xand that kgn gkg as we have shown continuity of that is | gn as for every evaluation functional
converselysuppose that evaluation functionals are bounded then from the riesz representation theorem there exists some gdx such that hggdx ig for all -the representer of evaluation if we define (xx gdx ( for all xx xthen : ( *gdx is an element of for every and hgk ig ( )so that the reproducing property in definition is verified the fact that an rkhs has continuous evaluation functionals means that if two functions gh are "closewith respect to kg then their evaluations ( ) (xare close for every formallyconvergence in kg norm implies pointwise convergence for all the following theorem shows that any finite function can serve as reproducing kernel as long as it is finitesymmetricand positive semidefinite the corresponding (unique!rkhs is the completion of the set of all functions of the form pn = ai xi where ai for all theorem moore-aronszajn given non-empty set and any finite symmetric positive semidefinite function rthere exists an rkhs of functions with reproducing kernel moreoverg is unique proof(sketchas the proof of uniqueness is treated in exercise the objective is to prove existence the idea is to construct pre-rkhs from the given function that has the essential structure and then to extend to an rkhs in particulardefine as the set of finite linear combinations of functions xn : ai xi xn xai rn = define on the following inner producth fgig :* = ai xi = : ai (xi = = then is an inner product space in factg has the essential structure we requirenamely that (ievaluation functionals are bounded/continuous (exercise and (iicauchy sequences in that converge pointwise also converge in norm (see exercise we then enlarge to the set of all functions for which there exists cauchy sequence in converging pointwise to and define an inner product on as the limit fgig :lim fn gn ig ( nwhere fn and gn to show that is an rkhs it remains to be shown that ( this inner product is well defined( evaluation functionals remain boundedand ( the space is complete detailed proof is established in exercises and
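To make the pre-RKHS construction in the proof sketch concrete, the following minimal numerical illustration uses the Gaussian kernel discussed in the next section (the bandwidth is an arbitrary choice). It forms a finite linear combination $f = \sum_i a_i k(x_i, \cdot)$, computes its squared $\mathcal{H}_0$ norm $a^\top K a$ via the Gram matrix $K$, checks that $K$ is positive semidefinite (so the norm is non-negative for any coefficients), and evaluates $f$ at a new point using kernel evaluations only.

import numpy as np

def gauss_kernel(x, y, bandwidth=0.5):
    # Gaussian kernel k(x, x') = exp(-(x - x')^2 / (2 b^2)) for scalar inputs.
    return np.exp(-np.subtract.outer(x, y)**2 / (2 * bandwidth**2))

rng = np.random.default_rng(12345)
x = rng.normal(size=6)      # points x_1, ..., x_n
a = rng.normal(size=6)      # coefficients of f = sum_i a_i * k(x_i, .)

K = gauss_kernel(x, x)      # Gram matrix K_ij = k(x_i, x_j)

# Squared H0 norm of f: <f, f>_H0 = a^T K a; positive semidefiniteness of K
# guarantees that this is non-negative for every coefficient vector a.
print(a @ K @ a)
print(np.linalg.eigvalsh(K).min())   # smallest eigenvalue of K (>= 0 up to rounding)

# Evaluating f at a new point x0 requires only kernel evaluations.
x0 = 0.3
print(a @ gauss_kernel(x, np.array([x0])))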
construction of reproducing kernels in this section we describe various ways to construct reproducing kernel for some feature space recall that needs to be finitesymmetricand positive semidefinite function (that isit satisfies ( )in view of theorem specifying the space and reproducing kernel corresponds to uniquely specifying an rkhs reproducing kernels via feature mapping perhaps the most fundamental way to construct reproducing kernel is via feature map ph we define (xx :hph( )ph( )iwhere denotes the euclidean inner product the function is clearly finite and symmetric to verify that is positive semidefinitelet ph be the matrix with rows ph( )ph(xn )and let [ an ]rn thenn = = ai (xi = = ai ph(xi ph( aphpha kphak example (linear kerneltaking the identity feature map ph(xx on gives the linear kernel (xx hxx xx linear kernel as can be seen from the proof of theorem the rkhs of functions corresponding to the linear kernel is the space of linear functions on this space is isomorphic to itselfas discussed in the introduction (see also exercise it is natural to wonder whether given kernel function corresponds uniquely to feature map the answer is noas we shall see by way of example example (feature maps and kernel functionslet andconsider feature maps ph and ph with ph ( : and ph ( :[xx] then kph (xx hph ( )ph ( ) xx but also kph (xx hph ( )ph ( ) xx thuswe arrive at the same kernel function defined for the same underlying set via two different feature maps kernels from characteristic functions another way to construct reproducing kernels on makes use of the properties of characteristic functions in particularwe have the following result we leave its proof as exercise
theorem reproducing kernel from characteristic function let be an -valued random vector that is symmetric about the origin (that isx and - arer identically distributed)and let ps be its characteristic functionps(te eit eit (dxfor then (xx :ps( is valid reproducing kernel on example (gaussian kernelthe multivariate normal distribution with mean vector and covariance matrix is clearly symmetric around the origin its characteristic function is ps(texp ktk gaussian kernel taking / this gives the popular gaussian kernel on kx (xx exp bandwidth radial basis function (rbfkernel ( the parameter is sometimes called the bandwidth note that in the machine learning literaturethe gaussian kernel is sometimes referred to as "theradial basis function (rbfkernel from the proof of theorem we see that the rkhs determined by the gaussian kernel is the space of pointwise limits of functions of the form kx xi (xai exp = we can think of each point xi having feature xi that is scaled multivariate gaussian pdf centered at xi example (sinc kernelthe characteristic function of uniform[- random variable (which is symmetric around is ps(tsinc( :sin( )/tso (xx sinc( - is valid kernel inspired by kernel density estimation (section )we may be tempted to use the pdf of random variable that is symmetric about the origin to construct reproducing kernel howeverdoing so will not work in generalas the next example illustrates example (uniform pdf does not construct valid reproducing kerneltake the function ps( {| }which is the pdf of uniform[- unfortunatelythe function (xx ps( is not positive semidefiniteas can be seen for example by constructing the matrix [ (ti )ij for the points and as followsps( ps(- ps(- ps( ps(- ps( ps( ps( ps(
the eigenvalues of are { / / / / / {- and so by theorem is not positive semidefinite matrixsince it has negative eigenvalue consequentlyk is not valid reproducing kernel one of the reasons why the gaussian kernel ( is popular is that it enjoys the universal approximation property [ ]the space of functions spanned by the gaussian kernel is dense in the space of continuous functions with support naturallythis is desirable property especially if there is little prior knowledge about the properties of ghowevernote that every function in the rkhs associated with gaussian kernel is infinitely differentiable moreovera gaussian rkhs does not contain non-zero constant functions indeedif is non-empty and openthen the only function of the form (xc { acontained in is the zero function ( consequentlyif it is known that is differentiable only to certain orderone may prefer the matern kernel with parameters ns kn (xx  - kx / kn kx / (nmatern kernel reproducing kernels using orthonormal features we have seen in sections and how to construct reproducing kernels from feature maps and characteristic functions another way to construct kernels on space is to work directly from the function class (xu)that isthe set of square-integrable functions on with respect to usee also definition for simplicityin what followswe will consider to be the lebesgue measureand will simply write (xrather than (xuwe will also assume that let { be an orthonormal basis of (xand let be sequence of positive numbers as discussed in section the kernel corresponding to feature map pp ph is (xx ph( )ph( = phi (xphi ( now consider (possibly infinitesequence of feature functions phi ci xi and define (xx :phi (xphi ( li xi (xxi ( )( > universal approximation property ( which gives functions that are (weaklydifferentiable to order bnc (but not necessarily to order dneherekn denotes the modified bessel function of the second kindsee ( the particular form of the matern kernel appearing in ( ensures that limnkn (xx (xx )where is the gaussian kernel appearing in ( we remark that sobolev spaces are closely related to the matern kernel up to constants (which scale the unit ball in the space)in dimension and for parameter / these - spaces can be identified with ps( (sktk - / / - (ktk)which in turn can be viewed as the characteristic function corresponding to the (radially symmetricmultivariate student' distribution with degrees of freedomthat iswith pdf ( ( kxk )- > the term radial basis function is sometimes used more generally to mean kernels of the form (xx (kx kfor some function function is said to be square-integrable if (xu(dxwhere is measure on
where li this is well-defined as long as > li which we assume from now on let be the linear space of functions of the form > ai xi where > fxi ixi we > ai /li as every function (xcan be represented as see that is linear subspace of (xon define the inner product fxi ihgxi fgih : > li with this inner productthe squared norm of > ai xi is > /li we show that is actually an rkhs with kernel by verifying the conditions of definition firstx kx li xi (xxi hi> as li by assumptionand so is finite secondthe reproducing property holds namelylet > ai xi thenp hk ih hk xi ih fxi > li li xi (xai > li ai xi (xf (xi> the discussion above demonstrates that kernels can be constructed via ( in fact(under mild conditionsany given reproducing kernel can be written in the form ( )where this series representation enjoys desirable convergence properties this result is known as mercer' theoremand is given below we leave the full proof including the precise conditions toe [ ]but the main idea is that reproducing kernel can be thought of as generalization of positive semidefinite matrix kand can also be written in spectral form (see also section in particularby theorem we can write vdvwhere is matrix of orthonormal eigenvectors [vand the diagonal matrix of the (positiveeigenvalues [ ]that isx (ijlv(ivj`> in ( belowxx play the role of ijand xplays the role of vtheorem mercer let be reproducing kernel for compact set then (under mild conditionsthere exists countable sequence of non-negative numbers {ldecreasing to zero and functions {xorthonormal in (xsuch that (xx lx(xx( for all xx ( `> where ( converges absolutely and uniformly on furtherif then (lxis an (eigenvalueeigenfunctionpair for the integral operator (xl (xdefined by [ ]( : (xyf (ydy for
theorem holds if (ithe kernel is continuous on (iithe function ( : (xxdefined for is integrable extensions of theorem to more general spaces and measures holdseee [ or [ the key importance of theorem lies in the fact that the series representation ( converges absolutely and uniformly on xxx the uniform convergence is much stronger condition than pointwise convergenceand means for instance that properties of the sequence of partial sumssuch as continuity and integrabilityare transferred to the limit example (mercersuppose [- and the kernel is (xx xx which corresponds to the rkhs of affine functions from to find the (eigenvalueeigenfunctionpairs for the integral operator appearing in theorem we need to find numbers {land orthonormal functions { ( )that solve - ( xx ( dx lx(xfor all [- consider first constant function (xc thenr for all [- ]we have that cand the normalization condition requires that - dx togetherthese give and +- nextconsider an affine function (xa bx orthogonality requires that - ( bxdx which implies (since moreoverthe normalization condition then requires - dx orequivalently / implying + / finallythe integral equation reads - ( xx bx dx bx = bx bx implying that / we take the positive solutions ( and )and note that (xx ( (xx ( xx (xx ) and so we have found the decomposition appearing in ( as an asideobserve that and are orthonormal versions of the first two legendre polynomials the corresponding feature map can be explicitly identified as ph (xl ( and ph (xl (xx kernels from kernels the following theorem lists some useful properties for constructing reproducing kernels from existing reproducing kernels
theorem rules for constructing kernels from other kernels if is reproducing kernel and ph is functionthen (ph( )ph( )is reproducing kernel from if is reproducing kernel and ris functionthen ( ) (xx ( is also reproducing kernel from if and are reproducing kernels from rthen so is their sum if and are reproducing kernels from rthen so is their product if and are reproducing kernels from and respectivelythen ((xy)( ): (xx (yy and kx ((xy)( ): (xx ) (yy are reproducing kernels from ( yx ( yr prooffor rules and it is easy to verify that the resulting function is finitesymmetricand positive semidefiniteand so is valid reproducing kernel by theorem for examplefor rule we have ni= nj= ai (yi ) for every choice of {ai }ni= and {yi }ni= since is reproducing kernel in particularit holds true for yi ph(xi ) rule is easy to show for kernels that admit representation of the form ( )since ( ( ( ( (xx (xx phi (xphi ( ph (xph ( > > ( ( ( phi (xph (xphi ( ph( ( ij> phk (xphk ( = (xx ) > showing that also admits representation of the form ( )where the new (possibly infinitesequence of features (phk is identified in one-to-one way with the sequence ( (ph( ph we leave the proof of rule as an exercise (exercise example (polynomial kernelconsider xx with (xx ( hxx ) polynomial kernel where hxx xx this is an example of polynomial kernel combining the fact that sums and products of kernels are again kernels (rules and of theorem )we find thatsince hxx and the constant function are kernelsso are hxx and ( hxx ) by writing (xx ( ) ( ) ( )
we see that (xx can be written as the inner product in of the two feature vectors ph(xand ph( )where the feature map ph can be explicitly identified as ph( [ ]thusthe rkhs determined by can be explicitly identified with the space of functions ph( ) for some in the above example we could explicitly identify the feature map howeverin general feature map need not be explicitly available using particular reproducing kernel corresponds to using an implicit (possibly infinite dimensional!feature map that never needs to be explicitly computed representer theorem recall the setting discussed at the beginning of this we are given training data {(xi yi )}ni= and loss function that measures the fit to the dataand we wish to find function that minimizes the training losswith the addition of regularization termas described in section to do thiswe assume first that the class of prediction functions can be decomposed as the direct sum of an rkhs hdefined by kernel function rand another linear space of real-valued functions on xthat isg meaning that any element can be written as with and in minimizing the training loss we wish to penalize the term of but not the term specificallythe aim is to solve the functional optimization problem min loss(yi (xi ) kgk ghh = ( herewe use slight abuse of notationkgkh means khkh if as above in this waywe can view as the null space of the functional kgkh this null space may be emptybut typically has small dimension mfor example it could be the one-dimensional space of constant functionsas in example example (null spaceconsider again the setting of example for which we have feature vectors [ ]and consists of functions of the form xb each function can be decomposed as where xband given gwe have kgkh kbkand so the null space of the functional kgkh (that isthe set of all functions for which kgkh is the set of constant functions herewhich has dimension regularization favors elements in and penalizes large elements in as the regularization parameter varies between zero and infinitysolutions to ( vary from "complex( to "simple( key reason why rkhss are so useful is the following by choosing to be an rkhs in ( this functional optimization problem effectively becomes parametric
kernel trick optimization problem the reason is that any solution to ( can be represented as finite-dimensional linear combination of kernel functionsevaluated at the training sample this is known as the kernel trick theorem representer theorem the solution to the penalized optimization problem ( is of the form (xn ai (xi xi= ( )( = where { qm is basis of prooflet span xi clearlyf thenthe hilbert space can be represented as where is the orthogonal complement of in other wordsf is the class of functions ih xi ih iit followsby the reproducing kernel propertythat for all (xi xi ih nowtake any and write it as with and by the definition of the null space we have kgk moreoverby pythagorastheoremthe latter is equal to it follows that loss(yi (xi )gkgk loss(yi (xi (xi ) = = loss(yi (xi (xi ) = since we can obtain equality by taking this implies that the minimizer of the penalized optimization problem ( lies in the subspace of and hence is of the form ( substituting the representation ( of into ( gives the finite-dimensional optimization problemn min loss(yi (ka qe) akaarn erm = ( where is the (grammatrix with entries [ (xi ) nj nq is the matrix with entries [ (xi ) nj