<b><span style="font-size: large;">Random analyses in MATLAB</span></b><br /><br />This is a blog on statistics and data analysis in MATLAB written by <a href="http://kendrickkay.net/">Kendrick</a>. Basically, stuff that I think is cool, interesting, or useful. To run the MATLAB code on this site, you need to download <a href="http://github.com/kendrickkay/knkutils/">Kendrick's MATLAB utilities</a>.<br /><br /><b><span style="font-size: large;">Learning MATLAB</span></b> (2014-01-26)<br /><br />The new version of the <b>Statistics and data analysis in MATLAB</b> class is available at:<br /><br /> <a href="http://artsci.wustl.edu/~kkay/psych5007/">http://artsci.wustl.edu/~kkay/psych5007/</a><br /><br />The basics of MATLAB programming are now covered, with accompanying lecture videos.<br /><br /><b><span style="font-size: large;">New statistics class</span></b> (2012-04-05)<br /><br />Psych216A: Statistics and data analysis in MATLAB (Spring 2012)<br /> <a href="http://white.stanford.edu/%7Eknk/Psych216A/">http://white.stanford.edu/~knk/Psych216A/</a><br />Lecture videos and other materials are available online.<br /><br /><b><span style="font-size: large;">Principal components analysis</span></b> (2012-01-14)<br /><br />Principal components analysis (PCA) is useful for data exploration and dimensionality reduction. 
In this post, we will see that (1) PCA is just an application of SVD, (2) PCs define an orthogonal coordinate system such that in this system the data are uncorrelated, (3) PCs maximize the variance explained in the data, and (4) we can often use a small number of PCs to reconstruct (or approximate) a given set of data.<br /><br /><b><span style="font-size: large;">What are principal components (PCs)?</span></b><br /><br />Suppose we have a data matrix X with dimensions m x n, where each row corresponds to a different data point and each column corresponds to a different attribute. (For example, if we measured the height and weight of 1000 different people, then we could construct a data matrix with dimensions 1000 x 2.) Furthermore, let's presume that the mean of each column has been subtracted off (why this is important will become clear later). If we take the <a href="http://randomanalyses.blogspot.com/2011/12/svd-and-covariance-matrices.html">SVD</a> of X, we obtain matrices U, S, and V such that X = U*S*V'. 
The columns of V are mutually orthogonal unit-length vectors; these are the principal components (PCs) of X.<br /><br /><b><span style="font-size: large;">SVD on the data matrix or the covariance matrix</span></b><br /><br />Instead of taking the SVD of X (the data matrix), we can take the SVD of X'*X (the covariance matrix). This is because X'*X = (V*S'*U')*(U*S*V') = V*S.^2*V' (where S.^2 is a square matrix with the square of the diagonal entries of S along the diagonal). So, if we take the SVD of X'*X, the resulting V matrix should be identical to that obtained when taking the SVD of X (except for potential sign flips of the columns). A potential benefit of taking the SVD of the covariance matrix is reduced computational time. For example, if m >> n, then X is a large matrix of size m x n whereas X'*X is a small matrix of size n x n.<br /><br /><b><span style="font-size: large;">PCA decorrelates data</span></b><br /><br />One way to think about PCA is that it decorrelates the data matrix. Prior to PCA, the columns of the data matrix may have some correlation with each other, i.e. the dot-product of any given pair of columns of X may be non-zero. 
What PCA does is to provide a linear transformation of the data such that after the transformation, the columns of the data matrix are uncorrelated with one another. The transformation is specifically given by multiplication with the matrix V.<br /><br />What exactly does multiplication with V do? V has dimensions n x n and contains the principal components (PCs) in the columns. Given a point in n-dimensional space (i.e. a vector of dimensions 1 x n), if we multiply that point with V, what we are doing is projecting the point onto each of the PCs. This yields the coordinates of the point with respect to the space defined by the PCs. Since the PCs form an orthogonal basis, all we are really doing is rotating the space.<br /><br />Now let's see what happens when we take the data matrix X and multiply it with V. Since X = U*S*V', X*V = U*S*V'*V = U*S. The result, U*S, has the property that the columns are uncorrelated with one another. The reason is that the columns of U are already mutually orthogonal by way of the SVD; and since S is a diagonal matrix, multiplication with S simply rescales the columns of U, which does not change the condition of mutual orthogonality.<br /><br />% Let's see an example. 
Here we create 1000 points<br />% in two-dimensional space.<br />X = zeromean(randnmulti(1000,[],[1 .6; .6 1],[1 .5]),1);<br />[U,S,V] = svd(X,0);<br />figure; setfigurepos([100 100 500 250]);<br />subplot(1,2,1); hold on;<br />scatter(X(:,1),X(:,2),'r.');<br />axis equal;<br />h1 = drawarrow([0 0],V(:,1)','k-',[],10,'LineWidth',2);<br />h2 = drawarrow([0 0],V(:,2)','b-',[],10,'LineWidth',2);<br />legend([h1 h2],{'PC 1' 'PC 2'});<br />xlabel('Dimension 1');<br />ylabel('Dimension 2');<br />title('Data');<br />subplot(1,2,2); hold on;<br />X2 = X*V;<br />V2 = (V'*V)';<br />scatter(X2(:,1),X2(:,2),'r.');<br />axis equal;<br />h1 = drawarrow([0 0],V2(:,1)','k-',[],10,'LineWidth',2);<br />h2 = drawarrow([0 0],V2(:,2)','b-',[],10,'LineWidth',2);<br />legend([h1 h2],{'PC 1' 'PC 2'});<br />xlabel('Projection onto PC 1');<br />ylabel('Projection onto PC 2');<br />title('Data');<br /><br /><a href="http://4.bp.blogspot.com/-wduUats3Qrg/TxEs6SuI41I/AAAAAAAAA7Y/iYA5qcv_GTI/s1600/018-pca-001.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-wduUats3Qrg/TxEs6SuI41I/AAAAAAAAA7Y/iYA5qcv_GTI/s1600/018-pca-001.png" /></a><br /><br />% In the first panel, we plot the data as red dots. Notice <br />% that the two dimensions are moderately correlated with each other.<br />% By taking the SVD of the data, we obtain the PCs, and we plot<br />% the PCs as black and blue arrows. Notice that the PCs are <br />% orthogonal to each other. Also, notice that the first PC points <br />% in the direction along which the data tends to lie.<br />%<br />% In the second panel, we project the data onto the PCs and<br />% re-plot the data. Notice that the only thing that has <br />% happened is that the space has been rotated. In this new <br />% space, the data points are uncorrelated and the PCs are <br />% now aligned with the coordinate axes.<br /><br />% Let's look at another example. 
Whereas in the previous example<br />% we ensured that the columns of the data matrix were zero-mean,<br />% in this example, we will intentionally make the columns have<br />% non-zero means.<br />X = randnmulti(100,[1 -2],[1 .7; .7 1],[1 .5]);<br />% (now repeat the code above)<br /><br /><a href="http://3.bp.blogspot.com/-eeKNULWKS_E/TxEs6jptB3I/AAAAAAAAA7g/xGHjg6lRuJ8/s1600/018-pca-002.png"><img border="0" src="http://3.bp.blogspot.com/-eeKNULWKS_E/TxEs6jptB3I/AAAAAAAAA7g/xGHjg6lRuJ8/s1600/018-pca-002.png" /></a><br /><br />% In this example, the first PC again points in the direction along<br />% which the data tend to lie. (Actually, strictly speaking, the <br />% first PC points in the opposite direction. But there is a sign <br />% ambiguity in SVD --- the signs of corresponding columns of the U <br />% and V matrices can be flipped with no change to the overall math.<br />% So if we wanted to, we could simply flip the sign of the first PC (which <br />% corresponds to the first column of V) and also flip the sign of <br />% the first column of U.) Notice that the first PC does not <br />% point in the direction of the elongation of the cloud of points; <br />% rather, the first PC points towards the middle of the cloud. <br />% The reason this happens is that the columns of the data matrix were not <br />% mean-subtracted (i.e. centered), and as it turns out, the primary effect<br />% in the data is displacement from the origin.<br />%<br />% (One consequence of neglecting to center each column is that the columns <br />% of the data matrix after projection onto the PCs may have some correlation <br />% with one another. After projection onto the PCs, it is guaranteed only<br />% that the dot-product of the columns is zero. Correlation (r) involves<br />% more than just a dot-product; it involves both mean-subtraction and<br />% unit-length-normalization before computing the dot product. Thus, there<br />% is no guarantee that the columns are uncorrelated. 
Indeed, in the<br />% previous example, the correlation after projection on the PCs is r=-0.23.)<br />%<br />% Whether or not to subtract off the mean of each data column before computing<br />% the SVD depends on the nature of the data --- it is up to you to decide.<br /><br /><b><span style="font-size: large;">PCs point towards maximal variance in the data</span></b><br /><br />% Principal components have a particular ordering --- each principal<br />% component points in the direction of maximal variance that is <br />% orthogonal to each of the previous principal components. In this<br />% way, each principal component accounts for the maximal possible<br />% amount of variance, ignoring the variance already accounted for<br />% by the previous principal components. (With respect to explaining<br />% variance, it would be pointless for a given vector to be <br />% non-orthogonal to all previous ones; the extra descriptive <br />% power afforded by a vector lies only in the component of the <br />% vector that is orthogonal to the existing subspace.)<br /><br />% Let's see an example.<br />X = zeromean(randnmulti(50,[],[1 .8; .8 1],[1 .5]),1);<br />figure; setfigurepos([100 100 500 250]);<br />subplot(1,2,1); hold on;<br />scatter(X(:,1),X(:,2),'r.');<br />axis equal;<br />xlabel('Dimension 1');<br />ylabel('Dimension 2');<br />title('Data');<br />subplot(1,2,2); hold on;<br />h = [];<br />for p=1:size(X,1)<br /> h(p) = plot([0 X(p,1)],[0 X(p,2)],'k-');<br />end<br />scatter(X(:,1),X(:,2),'r.');<br />scatter(0,0,'g.');<br />axis equal; ax = axis;<br />xlabel('Dimension 1');<br />ylabel('Dimension 2');<br />title('Variance of the data');<br /><br /><a href="http://4.bp.blogspot.com/-kY0YDbg4yA4/TxEs64mqyNI/AAAAAAAAA7o/Ikrgy5P7JMg/s1600/018-pca-003.png"><img border="0" src="http://4.bp.blogspot.com/-kY0YDbg4yA4/TxEs64mqyNI/AAAAAAAAA7o/Ikrgy5P7JMg/s1600/018-pca-003.png" /></a><br /><br />% In the first plot, we simply plot the data. 
Before<br />% proceeding, we need to understand what it means to explain<br />% variance in data. Variance (without worrying about <br />% mean-subtraction or the normalization term) is simply the <br />% sum of the squares of the values in a given set of data. <br />% Now, since the sum of the squares of the coordinates of a <br />% data point is the same as the square of the distance of the <br />% data point from the origin, we can think of variance as <br />% equivalent to squared distance. To illustrate this, in the<br />% second plot we have drawn a black line between each data<br />% point and the origin. The aggregate of all of the black lines<br />% can be thought of as representing the variance of the data.<br />% If we have a model that is attempting to fit the data, we<br />% can ask how close the model comes to the data points. <br />% The closer that the model is to the data, the more variance<br />% the model explains in the data. Currently, without a model,<br />% our model fit is simply the origin, and we have 100% of the<br />% variance left to explain.<br />%<br />% What we would like to determine is the direction that accounts<br />% for maximal variance in the data. That is, we are looking for<br />% a vector such that if we were to use that vector to fit the <br />% data points, the fitted points would be as close to the data<br />% as possible. 
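<br /><br />Before the MATLAB demo, the claim that the first PC is the variance-maximizing direction can be sanity-checked numerically. Here is a small sketch in Python/NumPy (not from the original post; it uses plain NumPy in place of Kendrick's randnmulti/zeromean/unitlength utilities, and the variable names are my own):

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated 2-D data with the column means removed (a plain-NumPy
# stand-in for the post's randnmulti/zeromean utilities).
X = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=50)
X = X - X.mean(axis=0)

# Columns of V (rows of Vh) are the principal components.
U, s, Vh = np.linalg.svd(X, full_matrices=False)
pc1 = Vh[0]

def variance_explained(d):
    """Variance captured by fitting the data with unit direction d."""
    return np.sum((X @ d) ** 2)

# No random unit-length direction should capture more variance than
# the first PC (whose captured variance is the first squared singular
# value, s[0]**2).
best_random = max(
    variance_explained(v / np.linalg.norm(v))
    for v in rng.standard_normal((1000, 2))
)
assert variance_explained(pc1) >= best_random
```

The MATLAB code below makes the same point graphically by comparing the residuals for a sub-optimal direction against those for the first PC.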
<br />figure; setfigurepos([100 100 500 250]);<br />subplot(1,2,1); hold on;<br />direction = unitlength([.2 1]');<br />Xproj = X*direction*direction';<br />h = [];<br />for p=1:size(X,1)<br /> h(p) = plot([Xproj(p,1) X(p,1)],[Xproj(p,2) X(p,2)],'k-');<br />end<br />h0 = scatter(X(:,1),X(:,2),'r.');<br />h1 = scatter(Xproj(:,1),Xproj(:,2),'g.');<br />axis(ax);<br />xlabel('Dimension 1');<br />ylabel('Dimension 2');<br />title('Variance remaining for sub-optimal direction');<br />subplot(1,2,2); hold on;<br />[U,S,V] = svd(X,0);<br />direction = V(:,1);<br />Xproj = X*direction*direction';<br />h = [];<br />for p=1:size(X,1)<br /> h(p) = plot([Xproj(p,1) X(p,1)],[Xproj(p,2) X(p,2)],'k-');<br />end<br />h0 = scatter(X(:,1),X(:,2),'r.');<br />h1 = scatter(Xproj(:,1),Xproj(:,2),'g.');<br />axis(ax);<br />xlabel('Dimension 1');<br />ylabel('Dimension 2');<br />title('Variance remaining for optimal direction');<br /><br /><a href="http://4.bp.blogspot.com/-xsb50bks2wk/TxEs7M6_OvI/AAAAAAAAA7w/UqVo6TjF7ms/s1600/018-pca-004.png"><img border="0" src="http://4.bp.blogspot.com/-xsb50bks2wk/TxEs7M6_OvI/AAAAAAAAA7w/UqVo6TjF7ms/s1600/018-pca-004.png" /></a><br /><br />% In the first plot, we have deliberately chosen a sub-optimal<br />% direction (the direction points slightly to the right of vertical).<br />% Using the given direction, we have determined the best possible fit<br />% to each data point; the fitted points are shown in green. The <br />% distance from the fitted points to the actual data points is<br />% indicated by black lines. In the second plot, we have chosen the<br />% optimal direction, namely, the first principal component of the data.<br />% Notice that the total distance from the fitted points to the actual data <br />% points is much smaller in the second case than in the first. 
This<br />% reflects the fact that the first principal component explains much<br />% more variance than the direction we chose in the first plot.<br />%<br />% The idea, then, is to repeat this process iteratively --- first,<br />% we determine the vector that approximates the data as well as<br />% possible, then we add in a second vector that improves the<br />% approximation as much as possible, and so on.<br /><br /><b><span style="font-size: large;">Singular values indicate variance explained</span></b><br /><br />% A nice characteristic of PCA is that the PCs define<br />% an orthogonal coordinate system. Because of this property, <br />% the incremental improvements with which the PCs approximate<br />% the data are exactly additive.<br />%<br />% (To see why, imagine you have a point that is located at (x,y,z).<br />% The squared distance to the origin is x^2+y^2+z^2.<br />% If we use the x-axis to approximate the point,<br />% the model fit is (x,0,0) and the remaining squared distance is<br />% y^2+z^2. If we then use the y-axis to approximate the point,<br />% the model fit is (x,y,0) and the remaining squared distance is<br />% z^2. 
Finally, if we use the z-axis to approximate the point,<br />% the model fit is (x,y,z) and there is zero remaining distance.<br />% Thus, due to the geometric properties of Euclidean space, all<br />% of the variance components add up exactly.)<br />%<br />% A little math can show that the variance accounted <br />% for by individual principal components is given by the squares of<br />% the diagonal elements of the matrix S (which are also known as<br />% the singular values).<br />%<br />% The proportion of the total variance in a dataset that<br />% is accounted for by the first N PCs, where N ranges<br />% from 1 to the number of dimensions in the data, can<br />% be calculated simply as<br />% cumsum(diag(S).^2) / sum(diag(S).^2) * 100.<br />% This sequence of percentages is useful when choosing<br />% a small number of PCs to summarize a dataset.<br />% We will see an example of this below.<br /><br /><b><span style="font-size: large;">Matrix reconstruction</span></b><br /><br />% In data exploration, it is often useful to look at the big<br />% effects in the data. A quick and dirty technique is to identify<br />% a small number of PCs that define a subspace within which<br />% most of the data resides.<br />temp = unitlength(rand(100,9),1);<br />X = randnmulti(11,rand(1,9),temp'*temp);<br />[U,S,V] = svd(X,0);<br />varex = cumsum(diag(S).^2) / sum(diag(S).^2) * 100;<br />Xapproximate = [];<br />for p=1:size(X,2)<br /> % this is a nice trick for using the first p principal<br /> % components to approximate the data matrix. 
we leave it<br /> % to the reader to verify why this works.<br /> Xapproximate(:,:,p) = U(:,1:p)*S(1:p,1:p)*V(:,1:p)';<br />end<br />mn = min(X(:));<br />mx = max(X(:));<br />figure; setfigurepos([100 100 500 500]);<br />for p=1:size(X,2)<br /> subplot(3,3,p); hold on;<br /> imagesc(Xapproximate(:,:,p),[mn mx]);<br /> axis image; axis off;<br /> title(sprintf('PC %d, %.1f%% Variance',p,varex(p)));<br />end<br /><br /><a href="http://2.bp.blogspot.com/-SQjnXoWOMg8/TxEs7jE0IPI/AAAAAAAAA74/VVwapBTaEVk/s1600/018-pca-005.png"><img border="0" src="http://2.bp.blogspot.com/-SQjnXoWOMg8/TxEs7jE0IPI/AAAAAAAAA74/VVwapBTaEVk/s1600/018-pca-005.png" /></a><br /><br />% What we have done here is to create a dataset (dimensions 11 x 9)<br />% and then use an increasing number of PCs to approximate the data.<br />% The full dataset corresponds to "PC 9" where we use all 9<br />% PCs to approximate the data. Notice that using just 3 PCs<br />% allows us to account for 92.9% of the variance in the original data.<br />% <br />% A useful next step might be to try to interpret the first 3 PCs<br />% and/or to visualize the data projected onto the first 3 PCs.<br />% If we gained an understanding of what is happening in the<br />% first 3 PCs of the data, it would probably be safe to deem that we<br />% have a good understanding of the data.<br /><br /><b><span style="font-size: large;">SVD and covariance matrices</span></b> (2011-12-28)<br /><br />This post describes singular value decomposition (SVD) and how it applies to covariance matrices. 
As we will see in later posts, SVD and covariance matrices are central to understanding principal components analysis (PCA), linear regression, and multivariate Gaussian probability distributions.<br /><br /><b><span style="font-size: large;">SVD</span></b><br /><br />Singular value decomposition (SVD) decomposes a matrix X into three matrices U, S, and V such that:<br /> X = U*S*V'<br />The columns of U are mutually orthogonal unit-length vectors (these are called the left singular vectors); the columns of V are mutually orthogonal unit-length vectors (these are called the right singular vectors); and S is a matrix with zeros everywhere except for non-negative values along the diagonal (these values are called the singular values).<br /><br />% Let's see an example.<br />X = randn(10,3);<br />[U,S,V] = svd(X);<br />figure; setfigurepos([100 100 500 150]);<br />subplot(1,3,1); imagesc(U'*U); axis image square; colorbar; title('U^TU');<br />subplot(1,3,2); imagesc(V'*V); axis image square; colorbar; title('V^TV');<br />subplot(1,3,3); imagesc(S); axis image square; colorbar; title('S');<br /><br /><a href="http://3.bp.blogspot.com/-ADNyA9NwCDM/Tvv2wtE25HI/AAAAAAAAA64/l95N0gTrALM/s1600/017-svdandcovariancematrices-001.png"><img border="0" src="http://3.bp.blogspot.com/-ADNyA9NwCDM/Tvv2wtE25HI/AAAAAAAAA64/l95N0gTrALM/s1600/017-svdandcovariancematrices-001.png" /></a> <br /><br />Notice that U'*U and V'*V are identity matrices. This makes sense because the sum of the squares of the coordinates of a unit-length vector equals one and because the dot product of orthogonal vectors equals zero.<br /><br /><b><span style="font-size: large;">Covariance matrices</span></b><br /><br />Suppose X represents a set of linear regressors. That is, suppose the dimensions of X are m x n, representing n regressors defined in an m-dimensional space. Then, X'*X is an n x n covariance matrix, representing the covariance of each regressor with each other regressor. 
(Technically, this is not quite correct: true covariance involves subtracting off the mean of each regressor before calculating the pairwise dot-products and dividing the results by the number of elements in each regressor (m). Nevertheless, it is still useful to think of X'*X as a covariance matrix.)<br /><br /><b><span style="font-size: large;">SVD applied to covariance matrices</span></b><br /><br />SVD provides a useful decomposition of covariance matrices:<br /> X'*X = (U*S*V')'*(U*S*V') = V*S'*U' * U*S*V' = V*S.^2*V'<br />where S.^2 is a diagonal matrix consisting of zeros everywhere except for non-negative values along the diagonal (these values are the square of the values in the original S matrix).<br /><br />X'*X is an n x n matrix that can be interpreted as applying a linear operation to n-dimensional space. For instance, suppose we have a vector v with dimensions n x 1. This vector corresponds to a specific point in n-dimensional space. If we calculate (X'*X)*v, we obtain a new vector corresponding to a new point in n-dimensional space.<br /><br />The SVD decomposition of X'*X gives us insight into the nature of the linear operation. Specifically, the SVD decomposition tells us that the linear operation consists of three successive operations: a rotation, a scaling, and then an undoing of the original rotation:<br /><ol><li>The initial rotation comes from multiplication with V'. This is because the columns of V form a complete orthonormal basis (each column is a unit-length vector and all columns are orthogonal to one another). Multiplication of V' and v projects the vector v onto the basis. Change of basis simply rotates the coordinate system.</li><li>The scaling comes from multiplication with S.^2. Since S.^2 is a diagonal matrix, multiplication with S.^2 simply stretches the coordinate system along the axes of the coordinate system.</li><li>The final rotation comes from multiplication with V. 
This multiplication undoes the rotation that was performed by the initial multiplication with V'.</li></ol>% Let's see an example.<br />X = unitlength(zeromean(randnmulti(1000,[],[1 .6; .6 1]),1),1);<br />figure; setfigurepos([100 100 300 300]);<br />imagesc(X'*X,[0 1]);<br />axis image square;<br />colorbar;<br />title('Covariance matrix: X^TX');<br /><br /><a href="http://2.bp.blogspot.com/-VFxgqpplv2E/Tvv2w4efrdI/AAAAAAAAA7A/HUzMhJarRvw/s1600/017-svdandcovariancematrices-002.png"><img border="0" src="http://2.bp.blogspot.com/-VFxgqpplv2E/Tvv2w4efrdI/AAAAAAAAA7A/HUzMhJarRvw/s1600/017-svdandcovariancematrices-002.png" /></a> <br /><br />% In this example there are 2 regressors defined in a 1000-dimensional space.<br />% The regressors are moderately correlated with one another, as can be seen<br />% in the relatively high off-diagonal term in the covariance matrix.<br /><br />% Let's use SVD to interpret the operation performed by the covariance matrix.<br />[U,S,V] = svd(X);<br />angs = linspace(0,2*pi,100);<br />vectors = {};<br />circlevectors = {};<br />vectors{1} = eye(2);<br />circlevectors{1} = [cos(angs); sin(angs)];<br />vectors{2} = V'*vectors{1};<br />circlevectors{2} = V'*circlevectors{1};<br />vectors{3} = S'*S*vectors{2};<br />circlevectors{3} = S'*S*circlevectors{2};<br />vectors{4} = V*vectors{3};<br />circlevectors{4} = V*circlevectors{3};<br /> % now perform plotting<br />figure; setfigurepos([100 100 600 200]);<br />colors = 'rg';<br />titles = {'Original space' '(1) Rotate' '(2) Scale' '(3) Un-rotate'};<br />for p=1:4<br /> subplot(1,4,p);<br /> axis([-1.8 1.8 -1.8 1.8]); axis square; axis off;<br /> for q=1:2<br /> drawarrow(zeros(1,2),vectors{p}(:,q)',[colors(q) '-'],[],5);<br /> end<br /> plot(circlevectors{p}(1,:),circlevectors{p}(2,:),'k-');<br /> title(titles{p});<br />end<br /><br /><a href="http://2.bp.blogspot.com/-GlIQiAnQ3jc/Tvv2xEhOMqI/AAAAAAAAA7I/q7wKQyghHzI/s1600/017-svdandcovariancematrices-003.png"><img border="0" 
src="http://2.bp.blogspot.com/-GlIQiAnQ3jc/Tvv2xEhOMqI/AAAAAAAAA7I/q7wKQyghHzI/s1600/017-svdandcovariancematrices-003.png" /></a> <br /><br />% The first plot shows a circle in black and equal-length vectors oriented <br />% along the coordinate axes in red and green. The second plot shows what<br />% happens after the initial rotation operation. The third plot shows what<br />% happens after the scaling operation. The fourth plot shows what happens<br />% after the final rotation operation. Notice that in the end result, the<br />% red and green vectors are no longer orthogonal and the circle is now an ellipse.<br /><br />% Let's visualize the linear operation once more, but this time let's use<br />% the red and green vectors to represent the columns of the V matrix.<br />vectors = {};<br />vectors{1} = V;<br />vectors{2} = V'*vectors{1};<br />vectors{3} = S'*S*vectors{2};<br />vectors{4} = V*vectors{3};<br />% run the "now perform plotting" section above<br /><br /><a href="http://4.bp.blogspot.com/-VSn13s4-3QE/Tvv2xewFbhI/AAAAAAAAA7Q/Z_PhvZ-_mbk/s1600/017-svdandcovariancematrices-004.png"><img border="0" src="http://4.bp.blogspot.com/-VSn13s4-3QE/Tvv2xewFbhI/AAAAAAAAA7Q/Z_PhvZ-_mbk/s1600/017-svdandcovariancematrices-004.png" /></a><br /><br />% Compare the first plot to the last plot. 
Notice that the red and green vectors<br />% have not changed their angle but only their magnitude --- this reflects the<br />% fact that the columns of the V matrix are eigenvectors of the covariance matrix.<br />% Multiplication by the covariance matrix scales the space along the directions <br />% of the eigenvectors, which has the consequence that only the magnitudes of<br />% the eigenvectors, and not their angles, are modified.<br /><br /><b><span style="font-size: large;">Importance of error bars on data points when evaluating model quality</span></b> (2011-12-25)<br /><br />When measuring some quantity, the same value might not be obtained across repeated measurements --- in other words, noise may exist in the measurements. Some portion of the noise might be due to factors that can be controlled for, and if these factors are controlled for, then the noise level could potentially be reduced. However, for the sake of the present discussion, let's assume that the noise present in a given situation can neither be controlled nor predicted.<br /><br />When evaluating how well a model characterizes a given dataset, it is important to take into account the intrinsic noisiness of the data. This is because a model cannot be expected to predict all of the data, as doing so would imply that the model predicts even the portion of the data that is due to noise (which, by our definition, is unpredictable).<br /><br />Let's illustrate these ideas using a simple set of examples:<br /><br /><a href="http://1.bp.blogspot.com/-631OCflmv9k/Tvgc5PTuXyI/AAAAAAAAA6U/A6bHk5WYUFI/s1600/016-noiseandmodelevaluation-01.png"><img border="0" src="http://1.bp.blogspot.com/-631OCflmv9k/Tvgc5PTuXyI/AAAAAAAAA6U/A6bHk5WYUFI/s1600/016-noiseandmodelevaluation-01.png" /></a><br /><br />The top-left scatter plot shows some data (black dots) and a linear model (red line). 
The model does fairly well capturing the dependency of the y-coordinate on the x-coordinate. However, there are no error bars on the data points, so we lack critical information. Our interpretation of the model and the data will be quite different, depending on the size of the error bars.<br /><br />The upper-right scatter plot shows one possible case: the error on the data points is relatively small. In this case there is a large amount of variance in the data that is both real (i.e. not simply attributable to noise) and not accounted for by the model.<br /><br />The lower-left scatter plot shows another possible case: the error on the data points is relatively large. In this case there appears to be very little variance in the data that is both real and not accounted for by the model.<br /><br />The lower-right scatter plot shows one last case: the error on the data points is extremely large. This case is in fact nearly impossible, and suggests that the error bar estimates are inaccurate. We will examine this case in more detail later.<br /><br /><b><span style="font-size: large;">CODE</span></b><br /><br />% CASE A:<br />% We have a model and it is the optimal characterization of the data.<br />% The only failure of the model is its inability to predict the<br />% noise in the data. 
(Note that this case corresponds to case 2 <br />% in the initial set of examples.)<br />noiselevels = [10 2];<br />figure(999); setfigurepos([100 100 500 250]); clf;<br />for p=1:length(noiselevels)<br /> x = linspace(1,9,50);<br /> y = polyval([1 4],x);<br /> subplot(1,2,p); hold on;<br /> measurements = bsxfun(@plus,y,noiselevels(p)*randn(10,50)); % 10 measurements per data point<br /> mn = mean(measurements,1);<br /> se = std(measurements,[],1)/sqrt(10);<br /> h1 = scatter(x,mn,'k.');<br /> h2 = errorbar2(x,mn,se,'v','k-');<br /> if p==1<br /> ax = axis; % capture the axis bounds from the data before drawing the model line<br /> end<br /> h3 = plot(ax(1:2),polyval([1 4],ax(1:2)),'r-');<br /> xlabel('x'); ylabel('y');<br /> legend([h1 h2(1) h3],{'Data' 'Error bars' 'Model'},'Location','SouthEast');<br /> r2 = calccod(y,mn);<br /> dist = calccod(repmat(y,[10000 1]),bsxfun(@plus,y,sqrt(mean(se.^2))*randn(10000,50)),2);<br /> title(sprintf('R^2 = %.0f%%; maximum R^2 = [%.0f%% %.0f%%]',r2,prctile(dist,2.5),prctile(dist,97.5)));<br /> axis(ax);<br />end<br /><br /><a href="http://1.bp.blogspot.com/-gP1W4V74CjI/Tvgc5ktTQgI/AAAAAAAAA6c/bAVgJVEh8N0/s1600/016-noiseandmodelevaluation-02.png"><img border="0" src="http://1.bp.blogspot.com/-gP1W4V74CjI/Tvgc5ktTQgI/AAAAAAAAA6c/bAVgJVEh8N0/s1600/016-noiseandmodelevaluation-02.png" /></a><br /><br />% We have conducted two simulations. In both simulations, there is a simple linear<br />% relationship between x and y. Assuming this linear relationship, we have generated<br />% 10 measurements of y for various fixed values of x, and we have plotted the mean and <br />% standard error of these measurements. In the simulation on the left, the noise level<br />% is relatively high, whereas in the simulation on the right, the noise level is relatively<br />% low. The R^2 between the model and the data is 35% and 92% for the first and<br />% second simulations, respectively. 
(For now, ignore the reported "maximum R^2"<br />% values, as these will be explained later.)<br />%<br />% What these simulations demonstrate is that (1) an optimal model can produce very<br />% different R^2 values, depending on the level of noise in the data and (2) if a model<br />% passes through or close to the error bars around most data points, the model may be<br />% nearly, if not fully, optimal.<br /><br />% CASE B:<br />% We have a model and it characterizes the data only to a limited extent.<br />% The model fails to characterize aspects of the data that do not <br />% appear to be merely due to measurement noise. (Note that this case <br />% corresponds to case 1 in the initial set of examples.)<br />noiselevel = 2;<br />figure(998); setfigurepos([100 100 250 250]); clf; hold on;<br />x = linspace(1,9,50);<br />y = polyval([1 4],x);<br />measurements = bsxfun(@plus,y + 4*randn(1,50),noiselevel*randn(10,50));<br />mn = mean(measurements,1);<br />se = std(measurements,[],1)/sqrt(10);<br />h1 = scatter(x,mn,'k.');<br />h2 = errorbar2(x,mn,se,'v','k-');<br />h3 = plot(ax(1:2),polyval([1 4],ax(1:2)),'r-');<br />xlabel('x'); ylabel('y');<br />legend([h1 h2(1) h3],{'Data' 'Error bars' 'Model'},'Location','SouthEast');<br />r2 = calccod(y,mn);<br />dist = calccod(repmat(y,[10000 1]),bsxfun(@plus,y,sqrt(mean(se.^2))*randn(10000,50)),2);<br />title(sprintf('R^2 = %.0f%%; maximum R^2 = [%.0f%% %.0f%%]',r2,prctile(dist,2.5),prctile(dist,97.5)));<br /><br /><a href="http://1.bp.blogspot.com/-UmnGfjxexak/Tvgc53kIkpI/AAAAAAAAA6k/btuLI_vE9Kk/s1600/016-noiseandmodelevaluation-03.png"><img border="0" src="http://1.bp.blogspot.com/-UmnGfjxexak/Tvgc53kIkpI/AAAAAAAAA6k/btuLI_vE9Kk/s1600/016-noiseandmodelevaluation-03.png" /></a><br /><br />% In this simulation, there is an overall linear relationship between x and y<br />% (as indicated by the red line). 
However, this relationship does not fully <br />% characterize the y-values --- there are many data points that deviate <br />% substantially (many standard errors away) from the model. The R^2 between <br />% the model and the data is just 7%.<br />%<br />% Given the relatively low noise level in the data, it intuitively seems that<br />% a much higher R^2 value should be possible. For example, perhaps there is<br />% another regressor (besides the x-coordinate) that, if added to the model, <br />% would substantially improve the model's accuracy. Without getting into the<br />% issue of how to improve the current model, we can use a simple set of <br />% simulations to confirm that the current R^2 value is indeed not optimal.<br />% First, we assume that the current model's predictions of the data points<br />% are the true (noiseless) y-values. Then, we generate new sets of<br />% data by simulating measurements of the true y-values; for these<br />% measurements, we match the noise level to that observed in the actual<br />% dataset. Finally, we calculate the R^2 between the model and the<br />% simulated datasets. The 95% confidence interval on the resulting R^2<br />% values is indicated as the "maximum R^2" values in the title of the figure.<br />%<br />% The results of the simulations show that for our example,<br />% the R^2 value that we can expect to attain is at least 88%. This<br />% confirms our suspicion that the current R^2 value of 7% is suboptimal.<br /><br />% CASE C:<br />% We have a model and it appears to characterize the data quite well.<br />% In fact, the regularity and predictability of the data appear to be<br />% higher than what would be expected based on the large error bars<br />% on the data points.
(Note that this case corresponds to case 3 in the<br />% initial set of examples.)<br />noiselevel = 10;<br />figure(997); setfigurepos([100 100 250 250]); clf; hold on;<br />x = linspace(1,9,30);<br />y = polyval([1 4],x);<br />measurements = bsxfun(@plus,y,noiselevel*randn(10,30));<br />mn = mean(measurements,1);<br />se = std(measurements,[],1)/sqrt(10);<br />mn = .3*mn + .7*y; % a simple hack to make the data better than it should be<br />h1 = scatter(x,mn,'k.');<br />h2 = errorbar2(x,mn,se,'v','k-');<br />h3 = plot(ax(1:2),polyval([1 4],ax(1:2)),'r-');<br />xlabel('x'); ylabel('y');<br />legend([h1 h2(1) h3],{'Data' 'Error bars' 'Model'},'Location','SouthEast');<br />r2 = calccod(y,mn);<br />dist = calccod(repmat(y,[10000 1]),bsxfun(@plus,y,sqrt(mean(se.^2))*randn(10000,30)),2);<br />title(sprintf('R^2 = %.0f%%; maximum R^2 = [%.0f%% %.0f%%]',r2,prctile(dist,2.5),prctile(dist,97.5)));<br /><br /><a href="http://2.bp.blogspot.com/-RYFXyd1sbJI/Tvgc6Pn5rUI/AAAAAAAAA6s/sQ-LL_jaaLw/s1600/016-noiseandmodelevaluation-04.png"><img border="0" src="http://2.bp.blogspot.com/-RYFXyd1sbJI/Tvgc6Pn5rUI/AAAAAAAAA6s/sQ-LL_jaaLw/s1600/016-noiseandmodelevaluation-04.png" /></a> <br /><br />% In this simulation, the model characterizes the data quite well,<br />% with an R^2 value of 80%. However, performing the same<br />% "maximum R^2" calculations described earlier, we find<br />% that the 95% confidence interval on the R^2 values that we<br />% can expect given the noise level in the data is [2%, 58%].<br />% This indicates that the model is, in a sense, doing too well.<br />% The problem might be due to inaccuracy of the error bar<br />% estimates: perhaps the error bar estimates are larger than<br />% they should be, or perhaps the errors on different data points<br />% are not independent of one another (the assumption of<br />% independence is implicit in the calculation of the<br />% maximum R^2 values). 
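To make the "maximum R^2" calculation fully explicit, here is a standalone base-MATLAB sketch. The y and se values are hypothetical stand-ins for a real model and real error bars, prctile is replaced by sorting so that no toolbox is required, and (as noted above) the noise is drawn independently for each data point:

```matlab
% Standalone sketch of the "maximum R^2" calculation.  The model
% predictions y and the standard errors se are hypothetical.
rng(0);
x = linspace(1,9,50);
y = polyval([1 4],x);                     % model predictions, treated as truth
se = (2/sqrt(10)) * ones(1,50);           % hypothetical per-point standard errors
nsim = 10000;
simdata = bsxfun(@plus,y,sqrt(mean(se.^2))*randn(nsim,length(y)));
resid = bsxfun(@minus,simdata,y);         % model-vs-simulation residuals
total = bsxfun(@minus,simdata,mean(simdata,2));
dist = 100*(1 - sum(resid.^2,2) ./ sum(total.^2,2));
dist = sort(dist);                        % sorting replaces prctile
ci = dist(round([.025 .975]*nsim))'       % ~95% interval of attainable R^2
```

If the errors in the actual data were correlated across data points, the simulated interval computed this way would no longer be trustworthy, which is one route to the "too good" situation of CASE C.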
Resolving tricky problems like<br />% these requires detailed inspection of the data on a case-by-case basis.<br /><br />Kendrick Kay<br /><br /><b><span style="font-size: large;">Noise, model complexity, and overfitting</span></b> (2011-12-14)<br /><br />When building models, ideally we would (1) have plenty of data available and (2) know what model to apply to the data. If these conditions hold, then we would simply need to estimate the parameters of the model and verify that the model is indeed accurate. The problem is that in reality, these conditions often do not hold. That is, we often have limited or noisy data and we often do not know what model is appropriate for the data at hand. In these cases, we are faced with the task of trying different models and determining which model is best.<br /><br />It is useful to think of model complexity as a dimension along which models vary. Complex, flexible models have the potential to describe many different types of functions. The advantage of such models is that the true model (i.e. the model that most accurately characterizes the data) may in fact be contained in the set of models that can be described. The disadvantage of such models is that they have many free parameters, and it may be difficult to obtain good parameter estimates with limited or noisy data. (Or, stated another way: when fitting such models to limited or noisy data, there is a strong possibility of overfitting the data --- that is, a strong possibility that random variability in the data will have too much influence on parameter estimates and give rise to an inaccurate model.) On the other hand, simple, less flexible models describe fewer types of functions compared to complex models.
The advantage of simple models is that they have fewer free parameters, and so it becomes feasible to obtain good parameter estimates with limited or noisy data. (Or, stated another way: there is low likelihood that simple models will overfit the data.) The disadvantage of simple models is that the types of functions that can be described may be poor approximations to the true underlying function.<br /><br />A priori, it is impossible to say whether a complex or simple model will yield the most accurate model estimate for a given dataset (since it depends on the amount of data available, the nature of the effect, etc.). We must empirically try different models and evaluate their performance, e.g. using cross-validation. In this post, we provide a simple example illustrating these concepts.<br /><br /><b><span style="font-size: large;">CODE</span></b><br /><br />% Construct training dataset.<br />x = linspace(-2,2,30);<br />modelfun = @(x) x.^4 - 20*x.^3 + 5*x.^2 + 4*x + 1;<br />y = feval(modelfun,x) + 50*randn(1,length(x));<br /><br />% Construct testing dataset.<br />xval = repmat(linspace(-2,2,30),[1 10]);<br />yval = feval(modelfun,xval) + 50*randn(1,300);<br /><br />% Define max polynomial degrees to try.<br />degstodo = 1:10;<br />degs = [1 3 10]; % display these<br /><br />% Make a figure.<br />results = []; ax1 = []; ax2 = [];<br />figure(999); setfigurepos([100 100 800 600]); clf;<br />for dd=1:length(degstodo)<br /> makemodel = @(x) catcell(2,arrayfun(@(n) x'.^n,0:degstodo(dd),'UniformOutput',0));<br /> X = feval(makemodel,x);<br /> recboot = fitprfstatic(X,y',0,0,[],100,[],[],[],@calccod); % bootstrap<br /> rec = fitprfstatic(X,y',0,0,[],0,[],[],[],@calccod); % full fit<br /> pred = feval(makemodel,xval)*rec.params';<br /> results(dd,1) = rec.r; % training R^2<br /> results(dd,2) = calccod(pred,yval'); % testing R^2<br /> modelfits = [];<br /> for p=1:size(recboot.params,1)<br /> modelfits(p,:) = X*recboot.params(p,:)';<br /> end<br /> iii = 
find(ismember(degs,degstodo(dd)));<br /> if ~isempty(iii)<br /> subplot(length(degs),2,(iii-1)*2+1); hold on;<br /> h1 = scatter(x,y,'k.');<br /> mn = median(modelfits,1);<br /> se = stdquartile(modelfits,1,1);<br /> [d,ix] = sort(x);<br /> h2 = errorbar3(x(ix),mn(ix),se(ix),'v',[1 .8 .8]);<br /> h3 = plot(x(ix),mn(ix),'r-');<br /> h4 = plot(x(ix),feval(modelfun,x(ix)),'b-');<br /> uistack(h1,'top');<br /> xlabel('x'); ylabel('y');<br /> legend([h1 h3 h2 h4],{'Data' 'Model fit' 'Error bars' 'True model'},'Location','NorthEastOutside');<br /> ax1(iii,:) = axis;<br /> axis(ax1(1,:));<br /> title(sprintf('Max polynomial degree = %d',degstodo(dd)));<br /><br /> subplot(length(degs),2,(iii-1)*2+2); hold on;<br /> bar(0:degstodo(dd),median(recboot.params,1));<br /> errorbar2(0:degstodo(dd),median(recboot.params,1),stdquartile(recboot.params,1,1),'v','r-');<br /> set(gca,'XTick',0:degstodo(dd));<br /> xlabel('Polynomial degree');<br /> ylabel('Estimated weight');<br /> ax = axis;<br /> axis([-1 max(degstodo)+1 ax(3:4)]);<br /> ax2(iii,:) = axis;<br /> end<br />end<br /><br /><a href="http://1.bp.blogspot.com/-3nWEzBiY_aU/TumFBJzbMtI/AAAAAAAAA10/OYMHH0iQtuY/s1600/015-noiseandmodelcomplexity-01.png"><img border="0" src="http://1.bp.blogspot.com/-3nWEzBiY_aU/TumFBJzbMtI/AAAAAAAAA10/OYMHH0iQtuY/s1600/015-noiseandmodelcomplexity-01.png" /></a><br /><br />% We have created a dataset in which there is a nonlinear relationship<br />% between x and y. The observed data points are indicated by black dots,<br />% and the true relationship is indicated by a blue line. We have fit three<br />% different models to the data: a linear model (max polynomial degree of 1),<br />% a cubic model (max polynomial degree of 3), and a model that includes<br />% polynomials up to degree 10. 
For each model, we have bootstrapped the<br />% fits, allowing us to see the variability of the model fit (pink error bars<br />% around the red line), as well as the variability of the parameter<br />% estimates (red error bars on the vertical black bars).<br />%<br />% There are several important observations:<br />% 1. The cubic model gives the most accurate model estimate. The linear model<br />% underfits the data, while the 10th degree model overfits the data.<br />% 2. In the cubic and 10th degree models, we estimate weights on higher<br />% polynomials, and this changes the estimated weights on the lower<br />% polynomials. The change in weights indicates that the lower and higher<br />% polynomials have correlated effects in the data. In general, correlated regressors <br />% are not necessarily problematic. However, in the given dataset, we do not<br />% have much data available and the influence of the higher polynomials is<br />% weak (in fact, there are no polynomials beyond degree 4 in the true model).<br />% Thus, we are better off ignoring the higher polynomials than attempting to<br />% include them in the model.<br /><br />% Because the data were generated from a known model, we can compare<br />% the various models to the known model in order to see which model is<br />% most accurate. However, in general, we do not have access to the true,<br />% underlying model. 
Instead, what we can do is to use cross-validation to<br />% quantify model accuracy.<br />figure(998); clf; hold on;<br />h = plot(results,'o-');<br />h2 = straightline(calccod(feval(modelfun,xval),yval),'h','k-');<br />legend([h' h2],{'Training' 'Testing' 'Testing (true model)'});<br />set(gca,'XTick',1:length(degstodo),'XTickLabel',mat2cellstr(degstodo));<br />ax = axis;<br />axis([0 length(degstodo)+1 0 ax(4)]);<br />xlabel('Max polynomial degree');<br />ylabel('Variance explained (R^2)');<br /><br /><a href="http://1.bp.blogspot.com/-80txuyZ30nA/TumEI2oF_KI/AAAAAAAAA1o/eW4syS8At9s/s1600/015-noiseandmodelcomplexity-02.png"><img border="0" src="http://1.bp.blogspot.com/-80txuyZ30nA/TumEI2oF_KI/AAAAAAAAA1o/eW4syS8At9s/s1600/015-noiseandmodelcomplexity-02.png" /></a><br /><br />% In this figure, we see that increasing the maximum polynomial degree invariably<br />% increases performance on the training data. However, we see that increasing<br />% the maximum polynomial degree only increases performance on the testing data<br />% up to a degree of 3. Beyond degree 3, the additional higher-order polynomials<br />% cause the model to overfit the training data and produce model estimates<br />% that perform more poorly on the testing data. Thus, by using cross-validation we<br />% have found that the cubic model is the most accurate model.<br />%<br />% Notice that the performance of the cubic model is nevertheless still lower than<br />% that of the true model. In order to improve the performance of our model<br />% estimate, we either have to change our modeling strategy (e.g.
change the<br />% model, use regularization in parameter estimation) or collect more data.<br />% But in the end, there is no magic bullet --- some degree of model imperfection<br />% is an inevitable feature of model building!<br /><br />Kendrick Kay<br /><br /><b><span style="font-size: large;">Model accuracy vs. model reliability</span></b> (2011-12-10)<br /><br />In previous posts, we looked at <a href="http://randomanalyses.blogspot.com/2011/12/basics-of-regression-and-model-fitting.html">cross-validation</a> and <a href="http://randomanalyses.blogspot.com/2011/12/using-bootstrapping-to-quantify-model.html">bootstrapping</a> in the context of regression. Cross-validation and bootstrapping are similar in that both involve resampling the data and then fitting a model of interest. In cross-validation, we fit a model of interest to a subset of the data and then evaluate how well the model predicts the remaining data. In bootstrapping, we create bootstrap samples by drawing with replacement from the original data and then fit a model of interest to each bootstrap sample.
However, despite this superficial similarity, the two methods have fundamentally different purposes: cross-validation quantifies the accuracy of a model whereas bootstrapping quantifies the reliability of a model.<br /><br /><b><span style="font-size: large;">CODE</span></b><br /><br />% Let's distill the distinction between accuracy and reliability <br />% down to its core and look at a very simple example.<br />figure(999); clf; hold on;<br />h1 = scatter(1,7,'ro');<br />h2 = scatter(1,4,'bo');<br />h3 = errorbar2(1,4,1,'v','b-');<br />axis([0 2 0 12]);<br />legend([h1 h2 h3],{'True model' 'Estimated model' 'Error bars'});<br />ylabel('Value');<br />set(gca,'XTick',[]);<br /><br /><a href="http://4.bp.blogspot.com/-KC2wiED5vo8/TuPPpaG_pMI/AAAAAAAAA08/kGbbfzeCQNc/s1600/014-accuracyandreliability-01.png"><img border="0" src="http://4.bp.blogspot.com/-KC2wiED5vo8/TuPPpaG_pMI/AAAAAAAAA08/kGbbfzeCQNc/s1600/014-accuracyandreliability-01.png" /></a><br /><br />% In this example, we have a single number indicated by the red dot,<br />% and we are trying to match this number with a model. Through some<br />% means we have estimated a specific model, and the prediction of the <br />% model is indicated by the blue dot. Moreover, through some means we <br />% have estimated error bars on the model's prediction, and this is <br />% indicated by the blue line.<br /><br />% Now let's consider the accuracy and reliability of the estimated<br />% model. The accuracy of the model corresponds to how far the <br />% estimated model is away from the true model. 
The reliability <br />% of the model corresponds to how variable the estimated model is.<br />h4 = drawarrow([1.3 4.5],[1.03 4.52],'k-',[],10);<br />h5 = text(1.33,4.5,'Reliability');<br />h6 = plot([.95 .9 .9 .95],[7 7 4 4],'k-');<br />h7 = text(.88,5.5,'Accuracy','HorizontalAlignment','Right');<br /><br /><a href="http://4.bp.blogspot.com/-JLnSShJ0eLs/TuPPpxWcPBI/AAAAAAAAA1E/SQA1dLGl_HU/s1600/014-accuracyandreliability-02.png"><img border="0" src="http://4.bp.blogspot.com/-JLnSShJ0eLs/TuPPpxWcPBI/AAAAAAAAA1E/SQA1dLGl_HU/s1600/014-accuracyandreliability-02.png" /></a><br /><br />% Accuracy and reliability are not the same thing, although they do bear<br />% certain relationships to one another. For example, if reliability is <br />% low, then it is likely that accuracy is low. (Imagine that the error bar<br />% on a given model is very large. Then, we would expect that any given<br />% estimate of the model would not be well matched to the true model.)<br />% Conversely, if accuracy is high, then it is likely that reliability <br />% is also high. (If a model estimate predicts responses extremely <br />% well, then it is likely that the parameters of the model are well <br />% estimated.)<br />% <br />% However, an important case to keep in mind is that it is possible for a <br />% model to have high reliability but low accuracy. To see how this can <br />% occur, let's examine each possible configuration of accuracy and <br />% reliability.<br /><br />% CASE 1: MODEL IS RELIABLE AND ACCURATE.<br />% In this case, there are enough data to obtain good estimates of <br />% the parameters of the model, and the model is a good description <br />% of the data.
Let's see an example (quadratic model fitted to<br />% quadratic data).<br />x = rand(1,100)*14 - 8;<br />y = -x.^2 + 2*x + 4 + 6*randn(1,100);<br />rec = fitprfstatic([x.^2; x; ones(1,length(x))]',y',0,0,[],100,[],[],[],@calccod);<br />figure(998); clf; hold on;<br />h1 = scatter(x,y,'k.');<br />ax = axis;<br />xx = linspace(ax(1),ax(2),100);<br />X = [xx.^2; xx; ones(1,length(xx))]';<br />modelfits = [];<br />for p=1:size(rec.params,1)<br /> modelfits(p,:) = X*rec.params(p,:)';<br />end<br />mn = median(modelfits,1);<br />se = stdquartile(modelfits,1,1);<br />h2 = errorbar3(xx,mn,se,'v',[.8 .8 1]);<br />h3 = plot(xx,mn,'b-');<br />h4 = plot(xx,-xx.^2 + 2*xx + 4,'r-');<br />uistack(h1,'top');<br />xlabel('x'); ylabel('y');<br />legend([h1 h4 h3 h2],{'Data' 'True model' 'Estimated model' 'Error bars'});<br />title('Model is reliable and accurate');<br /><br /><a href="http://3.bp.blogspot.com/-DEoWJX4fqzo/TuPPwuGdoQI/AAAAAAAAA1M/7teZgdrXofc/s1600/014-accuracyandreliability-03.png"><img border="0" src="http://3.bp.blogspot.com/-DEoWJX4fqzo/TuPPwuGdoQI/AAAAAAAAA1M/7teZgdrXofc/s1600/014-accuracyandreliability-03.png" /></a><br /><br />% CASE 2: MODEL IS RELIABLE BUT INACCURATE.<br />% In this case, there are enough data to obtain good estimates of <br />% the parameters of the model, but the model is a bad description <br />% of the data. 
Let's see an example (linear model fitted to<br />% quadratic data).<br />x = rand(1,100)*10 - 5;<br />y = x.^2 - 3*x + 4 + 1*randn(1,100);<br />rec = fitprfstatic([x; ones(1,length(x))]',y',0,0,[],100,[],[],[],@calccod);<br />figure(997); clf; hold on;<br />h1 = scatter(x,y,'k.');<br />ax = axis;<br />xx = linspace(ax(1),ax(2),100);<br />X = [xx; ones(1,length(xx))]';<br />modelfits = [];<br />for p=1:size(rec.params,1)<br /> modelfits(p,:) = X*rec.params(p,:)';<br />end<br />mn = median(modelfits,1);<br />se = stdquartile(modelfits,1,1);<br />h2 = errorbar3(xx,mn,se,'v',[.8 .8 1]);<br />h3 = plot(xx,mn,'b-');<br />h4 = plot(xx,xx.^2 - 3*xx + 4,'r-');<br />uistack(h1,'top');<br />xlabel('x'); ylabel('y');<br />legend([h1 h4 h3 h2],{'Data' 'True model' 'Estimated model' 'Error bars'});<br />title('Model is reliable but inaccurate');<br /><br /><a href="http://4.bp.blogspot.com/-BL7fSt-wYQ0/TuPPw9tALBI/AAAAAAAAA1U/EkHCV9NNn_M/s1600/014-accuracyandreliability-04.png"><img border="0" src="http://4.bp.blogspot.com/-BL7fSt-wYQ0/TuPPw9tALBI/AAAAAAAAA1U/EkHCV9NNn_M/s1600/014-accuracyandreliability-04.png" /></a><br /><br />% CASE 3: MODEL IS UNRELIABLE BUT ACCURATE.<br />% This is not a likely situation. Suppose there are insufficient data to<br />% obtain good estimates of the parameters of a model. This implies that<br />% the parameters would fluctuate widely from dataset to dataset, which in<br />% turn implies that the predictions of the model would also fluctuate widely<br />% from dataset to dataset. Thus, for any given dataset, it would be unlikely <br />% that the predictions of the estimated model would be well matched to the data.<br /><br />% CASE 4. MODEL IS UNRELIABLE AND INACCURATE.<br />% In this case, there are insufficient data to obtain good estimates of <br />% the parameters of the model, and this supplies a plausible explanation<br />% for why the model does not describe the data well. 
(Of course, it could<br />% be the case that even with sufficient data, the estimated model would <br />% still be a poor description of the data; see case 2 above.) Let's see<br />% an example of an unreliable and inaccurate model (Gaussian model<br />% fitted to Gaussian data, but only a few noisy data points are available).<br />x = linspace(1,100,20);<br />y = evalgaussian1d([40 10 10 2],x) + 10*randn(1,20);<br />model = {[30 20 5 0] [-Inf 0 -Inf -Inf; Inf Inf Inf Inf] @(pp,xx) evalgaussian1d(pp,xx)};<br />rec = fitprfstatic(x',y',model,[],[],100,[],[],[],@calccod);<br />figure(996); clf; hold on;<br />h1 = scatter(x,y,'k.');<br />ax = axis;<br />xx = linspace(ax(1),ax(2),100);<br />modelfits = [];<br />for p=1:size(rec.params,1)<br /> modelfits(p,:) = evalgaussian1d(rec.params(p,:),xx);<br />end<br />mn = median(modelfits,1);<br />se = stdquartile(modelfits,1,1);<br />h2 = errorbar3(xx,mn,se,'v',[.8 .8 1]);<br />h3 = plot(xx,mn,'b-');<br />h4 = plot(xx,evalgaussian1d([40 10 10 2],xx),'r-');<br />uistack(h1,'top');<br />xlabel('x'); ylabel('y');<br />legend([h1 h4 h3 h2],{'Data' 'True model' 'Estimated model' 'Error bars'});<br />title('Model is unreliable and inaccurate');<br /><br /><a href="http://1.bp.blogspot.com/-CRyrKBWO3dI/TuPPyllhyFI/AAAAAAAAA1c/MLaa2Q13GXE/s1600/014-accuracyandreliability-05.png"><img border="0" src="http://1.bp.blogspot.com/-CRyrKBWO3dI/TuPPyllhyFI/AAAAAAAAA1c/MLaa2Q13GXE/s1600/014-accuracyandreliability-05.png" /></a><br /><br />Kendrick Kay<br /><br /><b><span style="font-size: large;">Using bootstrapping to quantify model reliability</span></b> (2011-12-03)<br /><br />When fitting a model to data, the accuracy with which we can estimate the parameters of the model depends on the amount of data available and the level of noise in the data.
To quantify the reliability of model parameter estimates, a simple yet effective approach is to use bootstrapping.<br /><br /><b><span style="font-size: large;">CODE</span></b><br /><br />% Here's an example dataset.<br />x = 1:100;<br />y = evalgaussian1d([40 10 10 2],x) + 4*randn(1,100);<br />figure(999); clf; hold on;<br />h1 = scatter(x,y,'k.');<br />xlabel('x'); ylabel('y');<br /><br /><a href="http://4.bp.blogspot.com/-rt1K2-zqR3Q/TtsLoenZ1nI/AAAAAAAAA0M/vSY5OjG0l6I/s1600/013-bootstrappingandmodelreliability-01.png"><img border="0" src="http://4.bp.blogspot.com/-rt1K2-zqR3Q/TtsLoenZ1nI/AAAAAAAAA0M/vSY5OjG0l6I/s1600/013-bootstrappingandmodelreliability-01.png" /></a> <br /><br />% Let's presume that the model that describes these data is<br />% a one-dimensional Gaussian function. (Indeed, that's what<br />% we used to generate the data; see above.) Let's fit this<br />% model to the data using bootstrapping. Specifically, let's <br />% draw 100 bootstraps from the data points and fit the <br />% Gaussian model to each bootstrap.<br />model = {[30 20 5 0] [-Inf 0 -Inf -Inf; Inf Inf Inf Inf] @(pp,xx) evalgaussian1d(pp,xx)};<br />rec = fitprfstatic(x',y',model,[],[],100,[],[],[],@calccod);<br />modelfits = [];<br />for p=1:size(rec.params,1)<br /> modelfits(p,:) = evalgaussian1d(rec.params(p,:),x);<br />end<br />h2 = plot(modelfits','r-');<br />uistack(h1,'top');<br /><br /><a href="http://1.bp.blogspot.com/-H3WRaBVMsVE/TtsLorQ158I/AAAAAAAAA0U/Aj0xeAY81Xc/s1600/013-bootstrappingandmodelreliability-02.png"><img border="0" src="http://1.bp.blogspot.com/-H3WRaBVMsVE/TtsLorQ158I/AAAAAAAAA0U/Aj0xeAY81Xc/s1600/013-bootstrappingandmodelreliability-02.png" /></a> <br /><br />% Each red line is the model fit obtained from one of <br />% the bootstraps. 
By inspecting the variability of the<br />% red lines across bootstraps, we get a sense of the <br />% reliability of the model.<br /><br />% For better visualization, let's compute the mean and standard<br />% deviation across bootstraps. This gives us the mean model <br />% fit and its standard error.<br />delete(h2);<br />mn = mean(modelfits,1);<br />se = std(modelfits,[],1);<br />h3 = errorbar3(x,mn,se,'v',[1 .7 .7]);<br />h4 = plot(x,mn,'r-');<br />uistack(h1,'top');<br /><br /><a href="http://1.bp.blogspot.com/-nhgeVES-PVo/TtsLo_kKQUI/AAAAAAAAA0c/RRTMUXsJrMw/s1600/013-bootstrappingandmodelreliability-03.png"><img border="0" src="http://1.bp.blogspot.com/-nhgeVES-PVo/TtsLo_kKQUI/AAAAAAAAA0c/RRTMUXsJrMw/s1600/013-bootstrappingandmodelreliability-03.png" /></a><br /><br />% So far we have been examining the variability of the model fit.<br />% But we may be interested in the variability of the model parameters<br />% (which underlie the model fit). Let's look at how model parameters <br />% vary from bootstrap to bootstrap.<br />figure(998); clf; hold on;<br />np = size(rec.params,2);<br />bar(1:np,median(rec.params,1));<br />for p=1:size(rec.params,2)<br /> set(scatter(p*ones(1,100) - 0.1,rec.params(:,p),'r.'),'CData',[1 .7 .7]);<br />end<br />errorbar2(1:np,median(rec.params,1),stdquartile(rec.params,1,1),'v','r-','LineWidth',2);<br />set(gca,'XTick',1:np,'XTickLabel',{'Mean' 'Std Dev' 'Gain' 'Offset'});<br />xlabel('Parameter'); ylabel('Value');<br /><br /><a href="http://3.bp.blogspot.com/-L0glsK9MF6A/TtsLpBXTGUI/AAAAAAAAA0k/g5gcwaIDxnE/s1600/013-bootstrappingandmodelreliability-04.png"><img border="0" src="http://3.bp.blogspot.com/-L0glsK9MF6A/TtsLpBXTGUI/AAAAAAAAA0k/g5gcwaIDxnE/s1600/013-bootstrappingandmodelreliability-04.png" /></a><br /><br /> % In this figure, the light dots indicate the parameters obtained in<br />% different bootstraps, the black bars indicate the median across <br />% bootstraps, and the red error bars indicate the 68% confidence 
<br />% intervals calculated using percentiles (these confidence intervals<br />% are analogous to plus and minus one standard error in the case <br />% of Gaussian distributions).<br /><br />% There are two basic factors that determine the reliability of<br />% an estimated model: how many data points there are to fit the <br />% model and how noisy each individual data point is. First,<br />% let's look at an example that illustrates the impact of the <br />% number of data points.<br />nns = [20 80 320 1280];<br />noiselevels = [4 4 4 4];<br />figure(997); setfigurepos([100 100 500 400]); clf;<br />for p=1:length(nns)<br /> x = 1 + rand(1,nns(p))*99;<br /> y = evalgaussian1d([40 10 10 2],x) + noiselevels(p)*randn(1,nns(p));<br /> subplot(2,2,p); hold on;<br /> h1 = scatter(x,y,'k.');<br /> xlabel('x'); ylabel('y');<br /> if p == 1<br /> ax = axis;<br /> end<br /> model = {[30 20 5 0] [-Inf 0 -Inf -Inf; Inf Inf Inf Inf] @(pp,xx) evalgaussian1d(pp,xx)};<br /> rec = fitprfstatic(x',y',model,[],[],50,[],[],[],@calccod);<br /> modelfits = [];<br /> xx = 1:100;<br /> for q=1:size(rec.params,1)<br /> modelfits(q,:) = evalgaussian1d(rec.params(q,:),xx);<br /> end<br /> mn = mean(modelfits,1);<br /> se = std(modelfits,[],1);<br /> h3 = errorbar3(xx,mn,se,'v',[1 .7 .7]);<br /> h4 = plot(xx,mn,'r-');<br /> if p <= 2<br /> uistack(h1,'top');<br /> end<br /> axis(ax);<br /> title(sprintf('Number of data points = %d, noise level = %.1f',nns(p),noiselevels(p)));<br />end<br /><br /><a href="http://1.bp.blogspot.com/-uSHsSi3DoCk/TtsLpXYEA8I/AAAAAAAAA0s/usoX5N7_S1A/s1600/013-bootstrappingandmodelreliability-05.png"><img border="0" src="http://1.bp.blogspot.com/-uSHsSi3DoCk/TtsLpXYEA8I/AAAAAAAAA0s/usoX5N7_S1A/s1600/013-bootstrappingandmodelreliability-05.png" /></a><br /><br />% Notice that as the number of data points increases, the reliability<br />% of the estimated model increases. 
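As an aside, the resampling at the heart of all of these fits requires nothing beyond base MATLAB. Here is a minimal sketch in which polyfit stands in for fitprfstatic and a simple line (a hypothetical choice for illustration) stands in for the Gaussian model:

```matlab
% Minimal bootstrap in base MATLAB.  For illustration, the model is a
% simple line (a hypothetical choice) and polyfit does the fitting.
rng(0);
n = 80;
x = rand(1,n)*10;
y = 2*x + 1 + 2*randn(1,n);
nboot = 100;
params = zeros(nboot,2);
for b = 1:nboot
  ix = ceil(n*rand(1,n));                 % draw n indices with replacement
  params(b,:) = polyfit(x(ix),y(ix),1);   % refit to the bootstrap sample
end
mn = median(params,1)                     % central estimate of [slope intercept]
se = std(params,[],1)                     % bootstrap standard errors
```

Each row of params is one bootstrap estimate of [slope intercept]; the spread across rows is the reliability measure.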
Next, let's look at an example<br />% that illustrates the impact of the noise level.<br />nns = [80 80 80 80];<br />noiselevels = [4 2 1 0.5];<br />% (Re-run the same plotting code as above with these settings.)<br /><br /><a href="http://2.bp.blogspot.com/-Xe4DsjpGQhc/TtsLpq1a9sI/AAAAAAAAA00/ERcq49bDoyk/s1600/013-bootstrappingandmodelreliability-06.png"><img border="0" src="http://2.bp.blogspot.com/-Xe4DsjpGQhc/TtsLpq1a9sI/AAAAAAAAA00/ERcq49bDoyk/s1600/013-bootstrappingandmodelreliability-06.png" /></a> <br /><br />% Notice that as the noise level decreases, the reliability of the <br />% estimated model increases.<br /><br />Kendrick Kay<br /><br /><b><span style="font-size: large;">Basics of regression and model fitting</span></b> (2011-12-03)<br /><br />In regression, we build a model that uses one or more variables to predict some other variable. To understand regression, it is useful to play with simple two-dimensional data (where one variable is used to predict a second variable). An important aspect of regression is the use of cross-validation to evaluate the quality of different models.<br /><br /><b><span style="font-size: large;">CODE</span></b><br /><br />% Let's generate some data in two dimensions.<br />x = randn(1,200);<br />y = x.^2 + 3*x + 2 + 3*randn(1,200);<br />figure(999); clf; hold on;<br />h1 = scatter(x,y,'k.');<br />xlabel('x'); ylabel('y');<br /><br /><a href="http://2.bp.blogspot.com/-stAw6Y7s2PY/TtnldroKYhI/AAAAAAAAAzk/39PJm9oOUAE/s1600/012-regressionandmodelfitting-01.png"><img border="0" src="http://2.bp.blogspot.com/-stAw6Y7s2PY/TtnldroKYhI/AAAAAAAAAzk/39PJm9oOUAE/s1600/012-regressionandmodelfitting-01.png" /></a> <br /><br />% Through inspection of the scatterplot, we see that there<br />% appears to be a nonlinear relationship between x and y. <br />% We would like to build a model that quantitatively characterizes<br />% this relationship.<br /><br />% Let's consider two different models.
One model is a purely linear <br />% model, y = a*x + b, where a and b are free parameters. The <br />% second model is a quadratic model, y = a*x^2 + b*x + c where <br />% a, b, and c are free parameters. We will assume that we want<br />% to minimize the squared error between the model and the data.<br />model1 = polyfit(x,y,1);<br />model2 = polyfit(x,y,2);<br />ax = axis;<br />xx = linspace(ax(1),ax(2),100);<br />h2 = plot(xx,polyval(model1,xx),'r-','LineWidth',2);<br />h3 = plot(xx,polyval(model2,xx),'g-','LineWidth',2);<br />axis(ax);<br />legend([h2 h3],{'Linear model' 'Quadratic model'});<br />title('Direct fit (no cross-validation)');<br /><br /><a href="http://2.bp.blogspot.com/-mSSxb4Nb8EU/Ttnldm24xoI/AAAAAAAAAzo/E7e4MEJxDEc/s1600/012-regressionandmodelfitting-02.png"><img border="0" src="http://2.bp.blogspot.com/-mSSxb4Nb8EU/Ttnldm24xoI/AAAAAAAAAzo/E7e4MEJxDEc/s1600/012-regressionandmodelfitting-02.png" /></a><br /><br />% Although the linear model captures the basic trend in the data, the<br />% quadratic model seems to characterize the data better. In particular, the<br />% linear model seems to overestimate the data at middle values of x and<br />% underestimate the data at low and high values of x.<br /><br />% How can we formally establish which model is best? A simple approach<br />% is to quantify the fit quality of each model using a metric like R^2<br />% (<a href="http://randomanalyses.blogspot.com/2011/11/coefficient-of-determination-r2.html">see the blog post on R^2</a>) and then determine which model has<br />% the higher fit quality. The problem with this approach is that the<br />% quadratic model will always outperform the linear model in<br />% terms of fit quality, even when the true underlying relationship<br />% is purely linear. The reason is that the quadratic model subsumes<br />% the linear model and includes one additional parameter (the<br />% weight on the x^2 term). 
Thus, the quadratic model will always do at<br />% least as well as the linear model and will do even better given the<br />% extra parameter (unless the weight on the x^2 term is estimated to be<br />% exactly zero).<br /><br />% A better approach is to quantify the prediction quality of each model<br />% using cross-validation. This approach is exactly the same as the<br />% first approach, except that the quality of fit is evaluated on new<br />% data points that are not used to fit the parameters of the model. The<br />% intuition is that the fit quality on the data points used for training<br />% will, on average, be an overestimate of the true fit quality (i.e. the<br />% expected fit quality of the estimated model for data points drawn from<br />% the underlying data distribution).<br /><br />% With cross-validation, there is no guarantee that the more complex<br />% quadratic model will outperform the linear model. The best performing<br />% model will be the one that, after parameter estimation, most closely<br />% characterizes the underlying relationship between x and y.<br /><br />% Let's use cross-validation in fitting the linear and<br />% quadratic models to our example dataset. Specifically,<br />% let's use leave-one-out cross-validation, a method in which we<br />% leave a single data point out, fit the model on the remaining data<br />% points, predict the left-out data point, and then repeat this whole <br />% process for every single data point. 
We will use the metric of R^2<br />% to quantify how closely the model predictions match the data.<br />model1 = fitprfstatic([x; ones(1,length(x))]',y',0,0,[],-1,[],[],[],@calccod);<br />model2 = fitprfstatic([x.^2; x; ones(1,length(x))]',y',0,0,[],-1,[],[],[],@calccod);<br />figure(998); setfigurepos([100 100 600 300]); clf;<br />subplot(1,2,1); hold on;<br />h1 = scatter(x,y,'k.');<br />ax = axis;<br />[d,ii] = sort(x);<br />h2 = plot(x(ii),model1.modelfit(ii),'r-','LineWidth',2);<br />axis(ax);<br />xlabel('x'); ylabel('y');<br />title(sprintf('Linear model; cross-validated R^2 = %.1f',model1.r));<br />subplot(1,2,2); hold on;<br />h3 = scatter(x,y,'k.');<br />ax = axis;<br />[d,ii] = sort(x);<br />h4 = plot(x(ii),model2.modelfit(ii),'g-','LineWidth',2);<br />axis(ax);<br />xlabel('x'); ylabel('y');<br />title(sprintf('Quadratic model; cross-validated R^2 = %.1f',model2.r));<br /><br /><a href="http://1.bp.blogspot.com/-lUYGo09Bdmw/Ttnld39n0CI/AAAAAAAAAz0/3AWNyp_BYjg/s1600/012-regressionandmodelfitting-03.png"><img border="0" src="http://1.bp.blogspot.com/-lUYGo09Bdmw/Ttnld39n0CI/AAAAAAAAAz0/3AWNyp_BYjg/s1600/012-regressionandmodelfitting-03.png" /></a> <br /><br />% In these plots, the red and green lines indicate the predictions<br />% of the linear and quadratic models, respectively. 
Notice that<br />% the lines are slightly wiggly; this reflects the fact that each<br />% predicted data point comes from a different model estimate.<br />% We find that the quadratic model achieves a higher<br />% cross-validated R^2 value than the linear model.<br /><br />% Let's repeat these simulations for another dataset.<br />x = randn(1,50);<br />y = .1*x.^2 + x + 2 + randn(1,50);<br /><br /><a href="http://2.bp.blogspot.com/-sYV2fQcdYd0/TtnlecYY3lI/AAAAAAAAAz8/BlGmsJZn00o/s1600/012-regressionandmodelfitting-04.png"><img border="0" src="http://2.bp.blogspot.com/-sYV2fQcdYd0/TtnlecYY3lI/AAAAAAAAAz8/BlGmsJZn00o/s1600/012-regressionandmodelfitting-04.png" /></a><br /><br /><a href="http://1.bp.blogspot.com/-E9CDIZ9bRkM/TtnlejdDRlI/AAAAAAAAA0E/h3eB86tAaBA/s1600/012-regressionandmodelfitting-05.png"><img border="0" src="http://1.bp.blogspot.com/-E9CDIZ9bRkM/TtnlejdDRlI/AAAAAAAAA0E/h3eB86tAaBA/s1600/012-regressionandmodelfitting-05.png" /></a><br /><br />% For this dataset, even though the underlying relationship between<br />% x and y is quadratic, we find that the quadratic model produces<br />% a lower cross-validated R^2 than the linear model. This indicates<br />% that we do not have enough data to reliably estimate the parameters<br />% of the quadratic model. 
Thus, we are better off estimating the linear<br />% model and using the linear model estimate as a description of the data.<br />% (For comparison purposes, the R^2 between the true model and the<br />% observed y-values, calculated as calccod(.1*x.^2 + x + 2,y), is 47.4.<br />% The linear model's R^2 is not as good as this, but is substantially<br />% better than the quadratic model's R^2.)<br /><br />Kendrick Kayhttp://www.blogger.com/profile/07569573850095805172noreply@blogger.com0tag:blogger.com,1999:blog-6827056087436579459.post-9138977004967478762011-11-27T20:17:00.001-08:002011-11-27T20:26:32.411-08:00Fitting probability distributions to dataHere we cover the very basics of probability distributions and how to fit them to data. We will see that one way to fit a probability distribution is to determine the parameters of the distribution that maximize the likelihood of the data.<br /><br /><b><span style="font-size: large;">CODE</span></b><br /><br />% Let's look at a basic probability distribution, the Gaussian <br />% distribution. In the one-dimensional case, there are two parameters,<br />% the mean and the standard deviation. Let's see an example.<br />mn = 3;<br />sd = 1;<br />fun = @(x) 1/(sd*sqrt(2*pi)) * exp(-(x-mn).^2/(2*sd^2));<br /><br />% We have defined a function that takes a value and returns <br />% the corresponding likelihood for a Gaussian distribution with <br />% mean 3 and standard deviation 1. This function is called<br />% a probability density function. 
Let's visualize it.<br />figure(999); clf; hold on;<br />xx = -1:.01:7;<br />h1 = plot(xx,feval(fun,xx),'r-','LineWidth',2);<br />xlabel('Value');<br />ylabel('Likelihood');<br />legend(h1,{'Probability density function'});<br /><br /><a href="http://1.bp.blogspot.com/--0GHmhfpsKw/TtMLsabPfvI/AAAAAAAAAy0/IU_RIfCtioU/s1600/011-probabilitydistributions-001.png"><img border="0" src="http://1.bp.blogspot.com/--0GHmhfpsKw/TtMLsabPfvI/AAAAAAAAAy0/IU_RIfCtioU/s1600/011-probabilitydistributions-001.png" /></a> <br /><br />% For a Gaussian distribution, about 95% of the time, values drawn <br />% from the distribution will lie within two standard deviations <br />% of the mean. In our example, this range is between 1 and 5.<br />xx2 = linspace(mn-2*sd,mn+2*sd,100);<br />h2 = bar(xx2,feval(fun,xx2),1);<br />set(h2,'FaceColor',[.8 .8 .8]);<br />set(h2,'EdgeColor','none');<br />uistack(h1,'top');<br /><br /><a href="http://4.bp.blogspot.com/-Kn1pO5dqXfk/TtMLsnHv3bI/AAAAAAAAAy8/jrK4qXx1Kg0/s1600/011-probabilitydistributions-002.png"><img border="0" src="http://4.bp.blogspot.com/-Kn1pO5dqXfk/TtMLsnHv3bI/AAAAAAAAAy8/jrK4qXx1Kg0/s1600/011-probabilitydistributions-002.png" /></a> <br /><br />% We have shaded in the area underneath the probability density function <br />% that lies between 1 and 5. If we were to calculate the actual area<br />% of this region, we would find that it is (approximately) 0.95. <br />% The total area underneath the probability density function is 1.<br />delete(h2);<br /><br />% Now let's consider the concept of calculating the likelihood of a<br />% set of data given a particular probability distribution. 
Using <br />% our example Gaussian distribution (mean 3, standard deviation 1),<br />% let's calculate the likelihood of a sample set of data.<br />data = [2.5 3 3.5];<br />h3 = straightline(data,'v','b-');<br />likelihood = prod(feval(fun,data));<br />legend([h1 h3(1)],{'Probability density function' 'Data points'});<br />title(sprintf('Likelihood of data points = %.6f',likelihood));<br /><br /><a href="http://3.bp.blogspot.com/-yNzP9b3SW2M/TtMLs43cn8I/AAAAAAAAAzE/WWU8_k25ge0/s1600/011-probabilitydistributions-003.png"><img border="0" src="http://3.bp.blogspot.com/-yNzP9b3SW2M/TtMLs43cn8I/AAAAAAAAAzE/WWU8_k25ge0/s1600/011-probabilitydistributions-003.png" /></a> <br /><br />% The likelihood of observing the data points 2.5, 3, and 3.5 is<br />% obtained by multiplying the likelihoods of each individual data<br />% point. (We are assuming that the data points are independent.)<br />% Now, for comparison, let's calculate the likelihood of a different<br />% set of data.<br />delete(h3);<br />data = [4 4.5 5];<br />h3 = straightline(data,'v','b-');<br />likelihood = prod(feval(fun,data));<br />legend([h1 h3(1)],{'Probability density function' 'Data points'});<br />title(sprintf('Likelihood of data points = %.6f',likelihood));<br /><br /><a href="http://3.bp.blogspot.com/-vvToUMdIzpk/TtMLtD6KkHI/AAAAAAAAAzM/tuvP6mP6UNI/s1600/011-probabilitydistributions-004.png"><img border="0" src="http://3.bp.blogspot.com/-vvToUMdIzpk/TtMLtD6KkHI/AAAAAAAAAzM/tuvP6mP6UNI/s1600/011-probabilitydistributions-004.png" /></a> <br /><br />% Notice that the likelihood of this new set of data is much smaller<br />% than that of the original set of data.<br /><br />% Now that we know how to calculate the likelihood of a set of data<br />% given a particular probability distribution, we can think about<br />% how to actually fit a probability distribution to a set of data. 
<br />% All that we need to do is to set the parameters of the probability<br />% distribution such that the likelihood of the set of data is maximized.<br /><br />% Let's do an example. We have several data points and we want to fit<br />% a univariate (one-dimensional) Gaussian distribution to the data.<br />% To determine the optimal mean and standard deviation parameters,<br />% let's use brute force.<br />data = randn(1,100)*2.5 + 4;<br />fun = @(pp) sum(-log(1/(abs(pp(2))*sqrt(2*pi)) * exp(-(data-pp(1)).^2/(2*pp(2)^2))));<br />options = optimset('Display','iter','FunValCheck','on', ...<br /> 'MaxFunEvals',Inf,'MaxIter',Inf,'TolFun',1e-6,'TolX',1e-6);<br />params = fminsearch(fun,[0 1],options);<br /><br />% What we have done is to use fminsearch to determine the mean and standard<br />% deviation parameters that minimize the sum of the negative log likelihoods<br />% of the data points. (Maximizing the product of the likelihoods of the data<br />% points is equivalent to maximizing the log of the product of the likelihoods<br />% of the data points, which is equivalent to maximizing the sum of the log of <br />% the likelihood of each data point, which is equivalent to minimizing the <br />% sum of the negative log likelihood of the data points. Or, semi-formally:<br />% Let <ps> be a vector of likelihoods. Then,<br />% max prod(ps) <=> max log(prod(ps)) <br />% <=> max sum(log(ps)) <br />% <=> min sum(-log(ps))<br />% Note that taking the log of the likelihoods helps avoid numerical <br />% precision issues.) 
Let's visualize the results.<br />mn = params(1);<br />sd = abs(params(2));<br />fun = @(x) 1/(sd*sqrt(2*pi)) * exp(-(x-mn).^2/(2*sd^2));<br />figure(998); clf; hold on;<br />h2 = straightline(data,'v','k-');<br />ax = axis;<br />xx = linspace(ax(1),ax(2),100);<br />h1 = plot(xx,feval(fun,xx),'r-','LineWidth',2);<br />axis([ax(1:2) 0 max(feval(fun,xx))*1.1]);<br />xlabel('Value');<br />ylabel('Likelihood');<br />legend([h2(1) h1],{'Data' 'Model'});<br /><br /><a href="http://4.bp.blogspot.com/-UkjJBy5bmwg/TtMLtYXnIfI/AAAAAAAAAzQ/Dtz5iO7MThQ/s1600/011-probabilitydistributions-005.png"><img border="0" src="http://4.bp.blogspot.com/-UkjJBy5bmwg/TtMLtYXnIfI/AAAAAAAAAzQ/Dtz5iO7MThQ/s1600/011-probabilitydistributions-005.png" /></a> <br /><br />% Let's take the mean and standard deviation parameters that we found<br />% using optimization and compare them to the mean and standard deviation<br />% of the data points.<br />params<br />[mean(data) std(data,1)]<br /><br /><a href="http://3.bp.blogspot.com/-PYPSXNfDHQc/TtMLtqGwKtI/AAAAAAAAAzc/cJuxMPrGKGQ/s1600/011-probabilitydistributions-006.png"><img border="0" src="http://3.bp.blogspot.com/-PYPSXNfDHQc/TtMLtqGwKtI/AAAAAAAAAzc/cJuxMPrGKGQ/s1600/011-probabilitydistributions-006.png" /></a> <br /><br />% Notice that the two sets of results are more or less identical (the<br />% difference can be attributed to numerical precision issues). 
Thus, <br />% we see that computing the mean and standard deviation of a set of <br />% data can be viewed as implicitly fitting a one-dimensional <br />% Gaussian distribution to the data.Kendrick Kayhttp://www.blogger.com/profile/07569573850095805172noreply@blogger.com0tag:blogger.com,1999:blog-6827056087436579459.post-59993810886979945632011-11-21T20:46:00.001-08:002011-11-22T21:34:50.699-08:00Peeking at P-values<div style="color: black;"><b>Guest Post by <a href="http://www-psych.stanford.edu/%7Ewinawer/">Jon </a></b></div>A common experimental practice is to collect data and then do a statistical test to evaluate whether the data differs significantly from a null hypothesis. Sometimes researchers peek at the data before data collection is finished and do a preliminary analysis of the data. If a statistical test indicates a negative result, more data is collected; if there is a positive result, data collection is stopped. This strategy invalidates the statistical test by inflating the likelihood of observing a false positive.<br /><br />In this post we demonstrate the amount of inflation obtained by this strategy. We sample from a distribution with a zero mean and test whether the sample mean differs from 0. As we will see, if we continually peek at the data, and then decide whether to continue data collection contingent on the partial results, we wind up with an elevated chance of rejecting the null hypothesis.<br /><br /><span style="font-size: large;">CODE</span><br />% Simulate 1000 experiments with 200 data points each<br />x = randn(200,1000);<br /><br />% We expect about 5% false positives, given an alpha of 0.05 <br />disp(mean(ttest(x, 0, 0.05))); <br /><br />% Now let's calculate the rate of false positives for different sample<br />% sizes. 
We assume a minimum of 10 samples and a maximum of 200.<br />h = zeros(size(x));<br />for ii = 10:200<br /> h(ii,:) = ttest(x(1:ii, :), 0, 0.05);<br />end<br /><br />% The chance of a false positive is about 0.05, no matter how many data points<br />figure(99);<br />plot(1:200, mean(h, 2), 'r-', 'LineWidth', 2)<br />ylim([0 1])<br />ylabel('Probability of a false positive')<br />xlabel('Number of samples') <br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-rhhyT8uoltQ/TssvA9HCSiI/AAAAAAAADNk/n_frXi8etMo/s1600/falsepositives.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-rhhyT8uoltQ/TssvA9HCSiI/AAAAAAAADNk/n_frXi8etMo/s1600/falsepositives.png" /></a></div><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-tKNutTM70Fg/Tssv45KqvvI/AAAAAAAADNs/T68D80PhiEk/s1600/falsepositives2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><br /></a></div>% How would the false positive rate change if we peeked at the data?<br />% To simulate peeking, we take the cumulative sum of h values for each<br />% simulation. The result of this is that if at any point we reject the null<br />% (h=1), the remaining points for that simulation also assume we rejected<br />% null. 
<br />peekingH = logical(cumsum(h));<br /><br />figure(99); hold on<br />plot(1:200, mean(peekingH, 2), 'k-', 'LineWidth', 2)<br />legend('No peeking', 'Peeking')<br /><a href="http://1.bp.blogspot.com/-tKNutTM70Fg/Tssv45KqvvI/AAAAAAAADNs/T68D80PhiEk/s1600/falsepositives2.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-tKNutTM70Fg/Tssv45KqvvI/AAAAAAAADNs/T68D80PhiEk/s1600/falsepositives2.png" /></a><br />% The plot demonstrates the problem with peeking: we defined the likelihood<br />% of a false positive as our alpha value (here, 0.05), but we have created<br />% a false positive rate that is much higher.Jonnoreply@blogger.com2tag:blogger.com,1999:blog-6827056087436579459.post-76730107493338564282011-11-19T22:21:00.001-08:002011-11-19T22:31:16.156-08:00Nonparametric probability distributionsParametric probability distributions (e.g. the Gaussian distribution) make certain assumptions about the distribution of a set of data. Instead of making these assumptions, we can use nonparametric methods to estimate probability distributions (such methods include bootstrapping, histograms, and kernel density estimation).<br /><br /><b><span style="font-size: large;">CODE</span></b><br /><br />% Generate some data points.<br />x = 4 + randn(1,100).^2 + 0.2*randn(1,100);<br /><br />% Let's visualize the data.<br />figure(999); clf; hold on;<br />axis([0 12 0 0.8]); ax = axis;<br />h1 = straightline(x,'v','r-');<br />xlabel('Value');<br />ylabel('Probability');<br /><br /><a href="http://3.bp.blogspot.com/-kPi0WxjhWxE/Tsicsxa3Q6I/AAAAAAAAAyU/tbQEjiHWFeI/s1600/010-nonparametricdistributions-001.png"><img border="0" src="http://3.bp.blogspot.com/-kPi0WxjhWxE/Tsicsxa3Q6I/AAAAAAAAAyU/tbQEjiHWFeI/s1600/010-nonparametricdistributions-001.png" /></a><br /><br />% We have drawn a vertical line at each data point.<br />% Imagine that the vertical lines collectively<br />% define a probability distribution that has a value<br />% of 0 
everywhere except for the data points <br />% where there are infinitely high spikes. This <br />% probability distribution is what the bootstrapping <br />% method is in fact implicitly using --- drawing <br />% random samples with replacement from the <br />% original data is equivalent to treating the data<br />% points themselves as defining the probability <br />% distribution estimate.<br /><br />% Now let's construct a histogram.<br />[nn,cc] = hist(x,20);<br />binwidth = cc(2)-cc(1);<br />h2 = bar(cc,nn/sum(nn) * (1/binwidth),1);<br /><br /><a href="http://4.bp.blogspot.com/-bHQ3xBrM11o/TsictDGXEyI/AAAAAAAAAyc/0nlAisDQlPs/s1600/010-nonparametricdistributions-002.png"><img border="0" src="http://4.bp.blogspot.com/-bHQ3xBrM11o/TsictDGXEyI/AAAAAAAAAyc/0nlAisDQlPs/s1600/010-nonparametricdistributions-002.png" /></a><br /><br />% The histogram is a simple nonparametric method for <br />% estimating the probability distribution associated with<br />% a set of data. It is nonparametric because it makes<br />% no assumption about the shape of the probability<br />% distribution. Note that we have scaled the histogram<br />% in such a way as to ensure that the area occupied by <br />% the histogram equals 1, which is a requirement for <br />% true probability distributions.<br /><br />% The simplest and most common method for summarizing a<br />% set of data is to compute the mean and standard deviation <br />% of the data. This can be seen as using a parametric<br />% probability distribution, the Gaussian distribution,<br />% to interpret the data. 
Let's make this connection<br />% explicit by plotting a Gaussian probability distribution<br />% whose mean and standard deviation are matched to that<br />% of the data.<br />mn = mean(x);<br />sd = std(x);<br />xx = linspace(ax(1),ax(2),100);<br />h3 = plot(xx,evalgaussian1d([mn sd 1/(sd*sqrt(2*pi)) 0],xx),'g-','LineWidth',2);<br /><br /><a href="http://2.bp.blogspot.com/-NxYSjPrvsYA/TsictRKTriI/AAAAAAAAAyk/Q2hSq7fioDQ/s1600/010-nonparametricdistributions-003.png"><img border="0" src="http://2.bp.blogspot.com/-NxYSjPrvsYA/TsictRKTriI/AAAAAAAAAyk/Q2hSq7fioDQ/s1600/010-nonparametricdistributions-003.png" /></a><br /><br />% Notice that the Gaussian distribution is reasonably<br />% matched to the data points and the histogram but tends<br />% to overestimate the data density at low values.<br /><br />% The histogram is a crude method since the probability<br />% distributions that it creates have discontinuities (the <br />% corners of the black rectangles). To avoid discontinuities,<br />% we can instead use kernel density estimation. In this method,<br />% we place a kernel at each data point and sum across the <br />% kernels to obtain the final probability distribution estimate.<br />% Let's apply kernel density estimation to our example using<br />% a Gaussian kernel with standard deviation 0.5. 
(To ensure<br />% visibility, we have made the height of each kernel higher than <br />% it actually should be.)<br />kernelwidth = 0.5;<br />vals = [];<br />h4 = [];<br />for p=1:length(x)<br /> vals(p,:) = evalgaussian1d([x(p) kernelwidth 1/(kernelwidth*sqrt(2*pi)) 0],xx);<br /> h4(p) = plot(xx,1/10 * vals(p,:),'c-');<br />end<br />h5 = plot(xx,mean(vals,1),'b-','LineWidth',2);<br /> % Note that we could have used MATLAB's ksdensity function to achieve<br /> % identical results:<br /> % vals = ksdensity(x,xx,'width',0.5);<br /> % h5 = plot(xx,vals,'b-');<br /><br />% Finally, add a legend.<br />legend([h1(1) h2(1) h3(1) h4(1) h5(1)],{'Data' 'Histogram' 'Gaussian' 'Kernels' 'Kernel Density Estimate'});<br /><br /><a href="http://3.bp.blogspot.com/-W7aRNrX5R8g/TsictmdgiUI/AAAAAAAAAys/l7o2VUnNJp4/s1600/010-nonparametricdistributions-004.png"><img border="0" src="http://3.bp.blogspot.com/-W7aRNrX5R8g/TsictmdgiUI/AAAAAAAAAys/l7o2VUnNJp4/s1600/010-nonparametricdistributions-004.png" /></a><br /><br />% Notice that the kernel density estimate is similar to the histogram, albeit smoother.<br /><br /><span style="font-size: large;"><b>ADDITIONAL ISSUES</b></span><br /><br />When constructing histograms, a free parameter is the number of bins to use. Typically, histograms are used as an exploratory method, so it would be quite acceptable to try out a variety of different values (so as to get a sense of what the data are like).<br /><br />Notice that the number of bins used in a histogram is analogous to the width of the kernel used in kernel density estimation.<br /><br />How do we choose the correct kernel width in kernel density estimation? If the kernel is too small, then we risk gaps in the resulting probability distribution, which is probably not representative of the true distribution. If the kernel is too large, then the resulting probability distribution will tend towards being uniform, which is also probably not representative of the true distribution. 
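One way to navigate between these two extremes is cross-validation. Here is a minimal sketch (using only built-in functions, operating on the data x defined above) that scores several candidate kernel widths by the leave-one-out log likelihood of the data; the particular candidate widths are arbitrary choices for illustration.

```matlab
widths = [.1 .25 .5 1 2];     % candidate kernel widths (arbitrary choices)
loglik = zeros(size(widths));
for w=1:length(widths)
  k = widths(w);
  for p=1:length(x)
    others = x([1:p-1 p+1:end]);  % all data points except point p
    % density at point p under the kernel estimate built from the others
    dens = mean(1/(k*sqrt(2*pi)) * exp(-(x(p)-others).^2/(2*k^2)));
    loglik(w) = loglik(w) + log(dens);
  end
end
[mx,ix] = max(loglik);
widths(ix)   % the width with the highest cross-validated likelihood
```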
We will address selection of kernel width in a later post, but the short answer is that we can use cross-validation to choose the optimal kernel width.Kendrick Kayhttp://www.blogger.com/profile/07569573850095805172noreply@blogger.com0tag:blogger.com,1999:blog-6827056087436579459.post-83963996043775702852011-11-12T17:17:00.000-08:002011-11-13T05:43:09.073-08:00Error bars on standard deviationsWe are often concerned with estimating the mean of a population. Given that we can obtain only a limited number of samples from the population, what we normally do is to compute the mean of the samples and then put an error bar on the mean.<br /><br />Now, sometimes we might be interested in the standard deviation of the population. Notice that the exact same problem arises --- with a limited number of samples from the population, we can obtain an estimate of the standard deviation of the population (by simply computing the standard deviation of the samples), but this is just an estimate. Thus, an error bar can be put on the standard deviation, too.<br /><br /><b><span style="font-size: large;">CODE</span></b><br /><br />% Let's perform a simulation. For several different sample sizes,<br />% we draw random samples from a normal distribution with mean 0<br />% and standard deviation 1. For each random sample, we compute<br />% the standard deviation of the sample. 
We then look at the<br />% distribution of the standard deviations across repeated simulations.<br />nn = 8;<br />ns = 2.^(1:nn);<br />cmap = cmapdistinct(nn);<br />figure(999); setfigurepos([100 100 700 250]); clf;<br />subplot(1,2,1); hold on;<br />h = []; sd = [];<br />for p=1:length(ns)<br />x = randn(100000,ns(p));<br />dist = std(x,[],2);<br />[n,x] = hist(dist,100);<br />h(p) = plot(x,n,'-','Color',cmap(p,:));<br />sd(p) = std(dist);<br />end<br />ax = axis;<br />legend(h,cellfun(@(x) ['n = ' x],mat2cellstr(ns),'UniformOutput',0));<br />xlabel('Standard deviation');<br />ylabel('Frequency');<br />title('Standard deviations obtained using different sample sizes');<br />subplot(1,2,2); hold on;<br />h2 = plot(1:nn,sd,'r-');<br />ax = axis; axis([0 nn+1 ax(3:4)]);<br />set(gca,'XTick',1:nn,'XTickLabel',mat2cellstr(ns));<br />xlabel('n');<br />ylabel('Standard deviation of distribution');<br />title('Spread of standard deviation estimates');<br /><br /><a href="http://3.bp.blogspot.com/-FGCPXSvIJ7M/Tr_IwYYE7DI/AAAAAAAAAyE/LypUUtlJzp4/s1600/009-errorbarsonstandarddeviations.png"><img border="0" src="http://3.bp.blogspot.com/-FGCPXSvIJ7M/Tr_IwYYE7DI/AAAAAAAAAyE/LypUUtlJzp4/s1600/009-errorbarsonstandarddeviations.png" /></a><br /><br />% Notice that with few data points, the standard deviations are highly variable.<br />% With increasing numbers of data points, the standard deviations are more tightly<br />% clustered around the true value of 1.<br />%<br />% To put some concrete numbers on this: at n = 32, the spread in the distribution<br />% (which we quantify using standard deviation) is about 0.12. 
This indicates that<br />% if we draw 32 data points from a normal distribution and compute the standard<br />% deviation of the data points, our standard deviation estimate is accurate only<br />% to about +/- 12%.<br /><br /><span style="font-size: large;"><b>FINAL OBSERVATIONS</b></span><br /><br />Recall that the standard error of the mean for a random sample drawn from a Gaussian distribution is simply the standard deviation of the sample divided by the square root of the number of data points. Notice that in computing the standard error estimate, we are implicitly estimating the standard deviation of the population. But we have just seen that this standard deviation estimate may be somewhat inaccurate, depending on the number of data points. This implies that standard errors are themselves subject to noise.<br /><br />To put it simply: we can put error bars on error bars. The error on error bars will tend to be high in the case of few data points.Kendrick Kayhttp://www.blogger.com/profile/07569573850095805172noreply@blogger.com0tag:blogger.com,1999:blog-6827056087436579459.post-55597586178691089672011-11-08T00:15:00.000-08:002011-11-08T00:16:56.760-08:00Testing null hypotheses using randomization, the bootstrap, and Monte Carlo methodsEarlier we saw how <a href="http://randomanalyses.blogspot.com/2011/10/randomization-tests.html">randomization</a> can be used to test the null hypothesis that there is no meaningful order in a set of data. This is actually just a special case of the general statistical strategy of using randomness to see what happens under various null hypotheses. This general statistical strategy covers not only randomization methods but also Monte Carlo methods and the bootstrap. Let's look at some examples.<br /><br /><b><span style="font-size: large;">Example 1: Using randomization to test whether two sets of data are different (e.g. 
in their means)</span></b><br /><br />In this example, we use randomization to check whether two sets of data come from different probability distributions. Let's pose the null hypothesis that the two sets of data actually come from the same probability distribution. Under this hypothesis, the two sets of data are interchangeable, so if we aggregate the data points and randomly divide the data points into two sets, then the results should be comparable to results obtained from the original data.<br /><br />% Generate two sets of data.<br />x = randn(1,100) + 1.4;<br />y = randn(1,100) + 1;<br /><br />% Let's aggregate the data, randomly divide the data, and compute the resulting <br />% differences in means. And for comparison, let's compute the actual difference in means.<br />[pval,val,dist] = randomization([x y],2,@(a) mean(a(1:100))-mean(a(101:200)),10000,0);<br /><br />% Visualize the results.<br />figure(999); clf; hold on;<br />hist(dist,100);<br />h1 = straightline(val,'v','r-');<br />ax = axis; axis([-.8 .8 ax(3:4)]);<br />xlabel('Difference in the means');<br />ylabel('Frequency');<br />legend(h1,{'Actual observed value'});<br />title(sprintf('Results obtained using random assignments into two groups; p (two-tailed) = %.4f',pval));<br /><br /><a href="http://3.bp.blogspot.com/-MlJTDl8yQWk/TrjjNjeYhQI/AAAAAAAAAxE/Rg4gkDG8dvY/s1600/008-testingnull-001.png"><img border="0" src="http://3.bp.blogspot.com/-MlJTDl8yQWk/TrjjNjeYhQI/AAAAAAAAAxE/Rg4gkDG8dvY/s1600/008-testingnull-001.png" /></a><br /><br />% We see that the actual difference in means, 0.525, is quite unlikely with respect<br />% to the differences in means obtained via randomization. The p-value is 0.0002,<br />% referring to the proportion of the distribution that is more extreme than the<br />% actual difference in means. 
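The randomization function used above comes from the utilities linked at the top of this blog. As a rough sketch, an equivalent permutation test can be written with built-in functions only; new variable names (agg, ndist, actual, ptwo) are used here so as not to disturb the results above.

```matlab
agg = [x y];                    % aggregate the two sets of data
ndist = zeros(1,10000);
for p=1:10000
  ix = randperm(200);           % random re-assignment into two groups
  ndist(p) = mean(agg(ix(1:100))) - mean(agg(ix(101:200)));
end
actual = mean(x) - mean(y);     % actual observed difference in means
ptwo = sum(abs(ndist) > abs(actual)) / length(ndist)  % two-tailed p-value
```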
(Since we don't have an a priori reason to expect<br />% a positive or negative difference, we use a two-tailed p-value --- that is,<br />% not only do we count the number of values greater than 0.525, but we also count<br />% the number of values less than -0.525.) Thus, the null hypothesis is probably<br />% incorrect.<br /><br />% Note that strictly speaking, we have not shown that the two sets of data come<br />% from probability distributions with different means --- all we have done is to<br />% reject the hypothesis that the sets of data come from the same probability<br />% distribution.<br /><br /><span style="font-size: large;"><b>Example 2: Using the bootstrap to test whether two sets of data are different (e.g. in their means)</b></span><br /><br />In the previous example, we posed the null hypothesis that the two sets of data come from the same distribution and then used randomization to generate new datasets. Notice that instead of using randomization to generate new datasets, we can use the bootstrap. The difference is that in the case of randomization, we enforce the fact that none of the data points are repeated within a set of data nor across the two sets of data, whereas in the case of the bootstrap, we do not enforce these constraints --- we generate new datasets by simply drawing data points with replacement from the original set of data. 
(The bootstrap strategy seems preferable as it probably generates more representative random samples, but this remains to be proven...)<br /><br />% Let's go through an example using the same data from Example 1.<br />% Aggregate the data, draw bootstrap samples, and compute the resulting<br />% differences in means.<br />dist = bootstrap([x y],@(a) mean(a(1:100)) - mean(a(101:200)),10000);<br /><br />% Compute the actual observed difference in means as well as the corresponding p-value.<br />val = mean(x) - mean(y);<br />pval = sum(abs(dist) > abs(val)) / length(dist);<br /><br />% Visualize the results.<br />figure(998); clf; hold on;<br />hist(dist,100);<br />h1 = straightline(val,'v','r-');<br />ax = axis; axis([-.8 .8 ax(3:4)]);<br />xlabel('Difference in the means');<br />ylabel('Frequency');<br />legend(h1,{'Actual observed value'});<br />title(sprintf('Results obtained using bootstraps of aggregated data; p (two-tailed) = %.4f',pval));<br /><br /><a href="http://4.bp.blogspot.com/-ENpBuwInmH0/TrjjN7ZkTyI/AAAAAAAAAxI/uQ6C5l8h_DU/s1600/008-testingnull-002.png"><img border="0" src="http://4.bp.blogspot.com/-ENpBuwInmH0/TrjjN7ZkTyI/AAAAAAAAAxI/uQ6C5l8h_DU/s1600/008-testingnull-002.png" /></a><br /><br />% We find that the actual difference in means is quite unlikely with respect<br />% to the differences in means obtained via bootstrapping. Moreover, the p-value is<br />% quite similar to that obtained with the randomization method.<br /><br /><span style="font-size: large;"><b>Example 3: Using Monte Carlo simulations to test whether a sequence of numbers is different from noise</b></span><br /><br />The randomization (Example 1) and bootstrap (Example 2) methods that we have seen depend on having an adequate number of data points. With sufficient data points, the empirical data distribution can serve as a reasonable proxy for the true data distribution. 
However, when the number of data points is small, the empirical data distribution may be a poor proxy for the true data distribution and so statistical procedures based on the empirical data distribution may suffer.<br /><br />One way to reduce dependencies on the empirical data distribution is to bring in assumptions (e.g. priors) on what the true data distribution is like. For example, one simple assumption is that the data follows a Gaussian distribution. Let's look at an example.<br /><br />% Suppose we observe the following data, where the x-axis is an <br />% independent variable (ranging from 1 to 5) and the y-axis<br />% is a dependent (measured) variable.<br />x = [1.5 3.2 2 2.5 4];<br />figure(997); clf; hold on;<br />scatter(1:5,x,'ro');<br />axis([0 6 0 6]);<br />xlabel('Independent variable');<br />ylabel('Dependent variable');<br />val = calccorrelation(1:5,x);<br />title(sprintf('r = %.4g',val));<br /><br /><a href="http://2.bp.blogspot.com/-f2M1-Xl9cnI/TrjjOFNcYhI/AAAAAAAAAxU/b4PirvO_ynw/s1600/008-testingnull-003.png"><img border="0" src="http://2.bp.blogspot.com/-f2M1-Xl9cnI/TrjjOFNcYhI/AAAAAAAAAxU/b4PirvO_ynw/s1600/008-testingnull-003.png" /></a><br /><br />% Although the observed correlation is quite high, how do we know<br />% the data aren't just random noise? Let's be more precise:<br />% Let's pose the null hypothesis that the data are simply random <br />% draws from a fixed Gaussian distribution. 
Under this hypothesis,<br />% what are correlation values that we would expect to obtain?<br />% As a matter of good practice, let's take random draws from a <br />% Gaussian distribution matched to the observed data with respect<br />% to mean and standard deviation, although because correlation is<br />% insensitive to gain and offset, matching the observed data<br />% in terms of mean and standard deviation is not actually necessary.<br />fake = mean(x) + std(x)*randn(10000,5);<br />dist = calccorrelation(repmat(1:5,[10000 1]),fake,2);<br />pval = sum(dist > val) / length(dist);<br />figure(996); clf; hold on;<br />hist(dist,100);<br />h1 = straightline(val,'v','r-');<br />xlabel('Correlation');<br />ylabel('Frequency');<br />legend(h1,{'Actual observed correlation'});<br />title(sprintf('Results obtained using Monte Carlo simulations; p (one-tailed) = %.4f',pval));<br /><br /><a href="http://2.bp.blogspot.com/-rg9MV6TrXaA/TrjjOoxzvBI/AAAAAAAAAxc/zbq8jIpiWno/s1600/008-testingnull-004.png"><img border="0" src="http://2.bp.blogspot.com/-rg9MV6TrXaA/TrjjOoxzvBI/AAAAAAAAAxc/zbq8jIpiWno/s1600/008-testingnull-004.png" /></a><br /><br />% Notice that the p-value is about 0.10, which means that about 10% of the time,<br />% purely random data would result in a correlation value as extreme as the <br />% one actually observed. Thus, the null hypothesis that the data are purely<br />% random Gaussian noise cannot be confidently rejected.Kendrick Kayhttp://www.blogger.com/profile/07569573850095805172noreply@blogger.com1tag:blogger.com,1999:blog-6827056087436579459.post-25398188273810381652011-11-02T10:18:00.000-07:002011-11-02T10:18:00.994-07:00Coefficient of determination (R^2)For the purposes of model evaluation, it is preferable to use the coefficient of determination (R^2) as opposed to correlation (r) or squared correlation (r^2). This is because correlation implicitly includes offset and gain parameters in the model. 
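Before turning to the details, here is a minimal numerical illustration of this point in Python/NumPy (synthetic data; the model's offset and gain errors are hypothetical choices made for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
data = 5 + rng.standard_normal(50)
# A model that tracks the data but gets the offset and gain somewhat wrong
model = 0.6 * (data - 5) + 4.5 + 0.3 * rng.standard_normal(50)

# Coefficient of determination (R^2), in percent
R2 = 100 * (1 - np.sum((model - data) ** 2)
                / np.sum((data - data.mean()) ** 2))

# Squared correlation (r^2), in percent
r2 = 100 * np.corrcoef(model, data)[0, 1] ** 2
```

Because r^2 implicitly refits the offset and gain, it is always at least as large as R^2; here the offset and gain errors leave r^2 high while dragging R^2 down.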
Instead of using correlation, the proper approach is to estimate the offset and gain parameters as part of the model-building process and then to use a metric (like R^2) that is sensitive to offset and gain. Note that when using R^2, getting the offset and gain estimated accurately is extremely important (otherwise, you can obtain low or even negative R^2 values).<br /><br /><b><span style="font-size: large;">CONCEPTS</span></b><br /><br />In a previous post on <a href="http://randomanalyses.blogspot.com/2011/10/correlation.html">correlation</a>, we saw that the correlation between two sets of numbers is insensitive to the offset and gain of each set of numbers. Moreover, we saw that if you are using correlation as a means for measuring how well one set of numbers predicts another, then you are implicitly using a linear model with free parameters (an offset parameter and a gain parameter). This implicit usage of free parameters may be problematic --- two potential models may yield the same correlation even though one model gets the offset and gain completely wrong. What we would like is a metric that does not include these implicit parameters. A good choice for such a metric is the coefficient of determination (R^2), which is closely related to correlation.<br /><br />Let <model> be a set of numbers that is a candidate match for another set of numbers, <data>. Then, R^2 is given by:<br /> 100 * (1 - sum((model-data).^2) / sum((data-mean(data)).^2))<br />There are two main components of this formula. The first component is the sum of the squares of the residuals (which are given by model-data). The second component is the sum of the squares of the deviation of the data points from their mean (this is the same as the variance of the data, up to a scale factor). So, intuitively, R^2 quantifies the size of the residuals relative to the size of the data and is expressed in terms of percentage. The metric is bounded at the top at 100% (i.e. 
zero residuals) and is unbounded at the bottom (i.e. there is no limit on the size of the residuals). This stands in contrast to the metric of r^2, which is bounded between 0% and 100%.<br /><br /><b><span style="font-size: large;">CODE</span></b><br /><br />% Let's see an example of R^2 and r^2.<br />% Generate a sample dataset and a sample model.<br />data = 5 + randn(50,1);<br />model = .6*(data-5) + 4.5 + 0.3*randn(50,1);<br /><br />% Now, calculate R^2 and r^2 and visualize.<br />R2 = calccod(model,data);<br />r2 = 100 * calccorrelation(model,data)^2;<br />figure(999); setfigurepos([100 100 600 220]); clf;<br />subplot(1,2,1); hold on;<br />h1 = plot(data,'k-');<br />h2 = plot(model,'r-');<br />axis([0 51 0 8]);<br />xlabel('Data point');<br />ylabel('Value');<br />legend([h1 h2],{'Data' 'Model'},'Location','South');<br />title(sprintf('R^2 = %.4g%%, r^2 = %.4g%%',R2,r2));<br />subplot(1,2,2); hold on;<br />h3 = scatter(model,data,'b.');<br />axis square; axis([0 8 0 8]); axissquarify; axis([0 8 0 8]);<br />xlabel('Model');<br />ylabel('Data');<br /><br /><a href="http://1.bp.blogspot.com/-JOQLmLqgoBg/TrF6BvUlmjI/AAAAAAAAAwY/G0OCK9csRFc/s1600/007-coefficientofdetermination-001.png"><img border="0" src="http://1.bp.blogspot.com/-JOQLmLqgoBg/TrF6BvUlmjI/AAAAAAAAAwY/G0OCK9csRFc/s1600/007-coefficientofdetermination-001.png" /></a><br /><br />% Notice that r^2 is larger than R^2. The reason for this is that r^2<br />% implicitly includes offset and gain parameters. 
To see this clearly,<br />% let's first apply an explicit offset parameter to the model and<br />% re-run the "calculate and visualize" code above.<br />model = model - mean(model) + mean(data);<br /><br /><a href="http://3.bp.blogspot.com/-3hE0QBWnAmE/TrF6B7aYCAI/AAAAAAAAAwc/-_GhlG1RMyA/s1600/007-coefficientofdetermination-002.png"><img border="0" src="http://3.bp.blogspot.com/-3hE0QBWnAmE/TrF6B7aYCAI/AAAAAAAAAwc/-_GhlG1RMyA/s1600/007-coefficientofdetermination-002.png" /></a><br /><br />% Notice that by allowing the model to get the mean right, we have <br />% closed some of the gap between R^2 and r^2. There is one more <br />% step: let's apply an explicit gain parameter to the model and<br />% re-run the "calculate and visualize" code above. (To do this<br />% correctly, we have to estimate both the offset and gain<br />% simultaneously in a linear regression model.)<br />X = [model ones(size(model,1),1)];<br />h = inv(X'*X)*X'*data;<br />model = X*h;<br /><br /><a href="http://4.bp.blogspot.com/-asaUGDgJXz4/TrF6CW3_kEI/AAAAAAAAAwk/xze1GkEIYD4/s1600/007-coefficientofdetermination-003.png"><img border="0" src="http://4.bp.blogspot.com/-asaUGDgJXz4/TrF6CW3_kEI/AAAAAAAAAwk/xze1GkEIYD4/s1600/007-coefficientofdetermination-003.png" /></a><br /><br />% Aha, the R^2 value is now the same as the r^2 value.<br />% Thus, we see that r^2 is simply a special case of R^2<br />% where we allow an offset and gain to be applied<br />% to the model to match the data. After applying<br />% the offset and gain, r^2 is equivalent to R^2.<br /><br />% Note that because fitting an offset and gain will <br />% always reduce the residuals between the model and the data,<br />% r^2 will always be greater than or equal to R^2.<br /><br />% In some cases, you might have a model that predicts the same<br />% value for all data points. 
Such cases are not a problem for<br />% R^2 --- the calculation proceeds in exactly the same way as usual.<br />% However, note that such cases are ill-defined for r^2, because <br />% the variance of the model is 0 and division by 0 is not defined.<br /><br />% An R^2 value of 0% has a special meaning --- you achieve 0% R^2<br />% if your model predicts the mean of the data correctly and does<br />% nothing else. If your model gets the mean wrong, then you can<br />% quickly fall into negative R^2 values (i.e. your model is worse<br />% than a model that simply predicts the mean). Here is an example.<br />data = 5 + randn(50,1);<br />model = repmat(mean(data),[50 1]);<br />R2 = calccod(model,data);<br />figure(998); setfigurepos([100 100 600 220]); clf;<br />subplot(1,2,1); hold on;<br />h1 = plot(data,'k-');<br />h2 = plot(model,'r-');<br />axis([0 51 0 8]);<br />xlabel('Data point');<br />ylabel('Value');<br />legend([h1 h2],{'Data' 'Model'},'Location','South');<br />title(sprintf('R^2 = %.4g%%',R2));<br />subplot(1,2,2); hold on;<br />model = repmat(4.5,[50 1]);<br />R2 = calccod(model,data);<br />h1 = plot(data,'k-');<br />h2 = plot(model,'r-');<br />axis([0 51 0 8]);<br />xlabel('Data point');<br />ylabel('Value');<br />legend([h1 h2],{'Data' 'Model'},'Location','South');<br />title(sprintf('R^2 = %.4g%%',R2));<br /><br /><a href="http://2.bp.blogspot.com/-BEqdjKb2bJk/TrF6C_JNDxI/AAAAAAAAAws/ktMtrCpoVR8/s1600/007-coefficientofdetermination-004.png"><img border="0" src="http://2.bp.blogspot.com/-BEqdjKb2bJk/TrF6C_JNDxI/AAAAAAAAAws/ktMtrCpoVR8/s1600/007-coefficientofdetermination-004.png" /></a><br /><br />% Finally, note that R^2 is NOT symmetric (whereas r^2 is symmetric).<br />% The reason for this has to do with the gain issue. 
Let's see an example.<br />data = 5 + randn(50,1);<br />regressor = data + 4*randn(50,1);<br />X = [regressor ones(50,1)];<br />h = inv(X'*X)*X'*data;<br />model = X*h;<br />r2 = 100 * calccorrelation(model,data)^2;<br />figure(997); setfigurepos([100 100 600 220]); clf;<br />subplot(1,2,1); hold on;<br />R2 = calccod(model,data);<br />h1 = scatter(model,data,'b.');<br />axis square; axis([0 8 0 8]); axissquarify; axis([0 8 0 8]);<br />straightline(mean(data),'h','k-');<br />xlabel('Model');<br />ylabel('Data');<br />title(sprintf('R^2 of model predicting data = %.4g%%; r^2 = %.4g%%',R2,r2));<br />subplot(1,2,2); hold on;<br />R2 = calccod(data,model);<br />h2 = scatter(data,model,'b.');<br />axis square; axis([0 8 0 8]); axissquarify; axis([0 8 0 8]);<br />straightline(mean(data),'h','k-');<br />xlabel('Data');<br />ylabel('Model');<br />title(sprintf('R^2 of data predicting model = %.4g%%; r^2 = %.4g%%',R2,r2));<br /><br /><a href="http://2.bp.blogspot.com/-jAZxjAOY7Ys/TrF6DiDCg5I/AAAAAAAAAw0/0ubqZRYL_Lw/s1600/007-coefficientofdetermination-005.png"><img border="0" src="http://2.bp.blogspot.com/-jAZxjAOY7Ys/TrF6DiDCg5I/AAAAAAAAAw0/0ubqZRYL_Lw/s1600/007-coefficientofdetermination-005.png" /></a><br /><br />% On the left is the correct ordering where the model is attempting to<br />% predict the data. The variance in the data is represented by the scatter<br />% of the blue dots along the y-dimension. The green line is the model fit,<br />% and it does a modest job at explaining some of the variance along the<br />% y-dimension. It is useful to compare against the black line, which is the<br />% prediction of a baseline model that only and always predicts the mean of <br />% the data. On the right is the incorrect ordering where the data is<br />% attempting to predict the model. The variance in the model is represented<br />% by the scatter of the blue dots along the y-dimension. 
The green line is<br />% the model fit, and notice that it does a horrible job at explaining the<br />% variance along the y-dimension (hence the very low R^2 value).<br />% The black line is much closer to the data points than the green line.<br />%<br />% Basically, what is happening is that the gain is correct in the case at <br />% the left but is incorrect in the case at the right. Because r^2 <br />% normalizes out the gain, the r^2 value is the same in both cases. <br />% But because R^2 is sensitive to gain, it gives very different results.<br />% To fix the case at the right, the gain on the quantity being used for <br />% prediction (the x-axis) needs to be greatly reduced so as to better <br />% match the quantity being predicted (the y-axis).Kendrick Kayhttp://www.blogger.com/profile/07569573850095805172noreply@blogger.com3tag:blogger.com,1999:blog-6827056087436579459.post-59917395408044900272011-10-30T22:57:00.000-07:002011-11-02T10:20:20.404-07:00Correlation (r)Correlation is a basic and fundamental concept. By 'correlation' I refer to Pearson's product-moment correlation (r), which I find to be most useful, but there are other versions of correlation. 
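As a quick cross-check of the "average product of z-scores" computation developed below, here is the same calculation sketched in Python/NumPy (synthetic data):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100)
y = 0.6 * x + 0.8 * rng.standard_normal(100)

# Pearson's r as the average product of the z-scored variables
zx = (x - x.mean()) / x.std()
zy = (y - y.mean()) / y.std()
r = np.mean(zx * zy)
# This agrees with the built-in computation, np.corrcoef(x, y)[0, 1]
```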
In a nutshell, correlation is a single number ranging from -1 to 1 that summarizes how well one set of numbers predicts another set of numbers.<br /><br /><b><span style="font-size: large;">CODE</span></b><br /><br />% Let's start with a simple example.<br />data = randnmulti(100,[10 6],[1 .6; .6 1],[3 1]);<br />figure(999); clf; hold on;<br />scatter(data(:,1),data(:,2),'r.');<br />axis square; axis([0 20 0 20]);<br />set(gca,'XTick',0:1:20,'YTick',0:1:20);<br />xlabel('X'); ylabel('Y');<br />title('Data');<br /><br /><a href="http://2.bp.blogspot.com/-Y_lCsWZCnjg/Tq42xkB6TNI/AAAAAAAAAo0/12Gk_9IV2-A/s1600/006-correlation-001.png"><img border="0" src="http://2.bp.blogspot.com/-Y_lCsWZCnjg/Tq42xkB6TNI/AAAAAAAAAo0/12Gk_9IV2-A/s1600/006-correlation-001.png" /></a><br /><br />% To compute the correlation, let's first standardize each <br />% variable, i.e. subtract off the mean and divide by the <br />% standard deviation. This converts each variable into <br />% z-score units.<br />dataz = calczscore(data,1);<br />figure(998); clf; hold on;<br />scatter(dataz(:,1),dataz(:,2),'r.');<br />axis square; axis([-5 5 -5 5]);<br />set(gca,'XTick',-5:1:5,'YTick',-5:1:5);<br />xlabel('X (z-scored)'); ylabel('Y (z-scored)');<br /><br /><a href="http://4.bp.blogspot.com/-KkOIbZoljfU/Tq4210VH6WI/AAAAAAAAApE/0Mibjzep-0Y/s1600/006-correlation-002.png"><img border="0" src="http://4.bp.blogspot.com/-KkOIbZoljfU/Tq4210VH6WI/AAAAAAAAApE/0Mibjzep-0Y/s1600/006-correlation-002.png" /></a><br /><br />% We would like a single number that represents the relationship<br />% between X and Y. Points that lie in the upper-right or lower-left<br />% quadrants support a positive relationship between X and Y (i.e.<br />% higher on X is associated with higher on Y), whereas points that<br />% lie in the upper-left or lower-right quadrants support a negative<br />% relationship between X and Y (i.e. 
higher on X is associated with<br />% lower on Y).<br />set(straightline(0,'h','k-'),'Color',[.6 .6 .6]);<br />set(straightline(0,'v','k-'),'Color',[.6 .6 .6]);<br />text(4,4,'+','FontSize',48,'HorizontalAlignment','center');<br />text(-4,-4,'+','FontSize',48,'HorizontalAlignment','center');<br />text(4,-4,'-','FontSize',60,'HorizontalAlignment','center');<br />text(-4,4,'-','FontSize',60,'HorizontalAlignment','center');<br /><br /><a href="http://1.bp.blogspot.com/-nxOFHCtwFq8/Tq425Y-MKyI/AAAAAAAAApM/vyIsTVZvrS0/s1600/006-correlation-003.png"><img border="0" src="http://1.bp.blogspot.com/-nxOFHCtwFq8/Tq425Y-MKyI/AAAAAAAAApM/vyIsTVZvrS0/s1600/006-correlation-003.png" /></a><br /><br />% Let's calculate the "average" relationship. To do this, we compute<br />% the average product of the X and Y variables (in the z-scored space).<br />% The result is the correlation value.<br />r = mean(dataz(:,1) .* dataz(:,2));<br />title(sprintf('r = %.4g',r));<br /><br /><a href="http://3.bp.blogspot.com/-5NNRTsfrAnQ/Tq4270yroYI/AAAAAAAAApU/s26gzW26mN4/s1600/006-correlation-004.png"><img border="0" src="http://3.bp.blogspot.com/-5NNRTsfrAnQ/Tq4270yroYI/AAAAAAAAApU/s26gzW26mN4/s1600/006-correlation-004.png" /></a><br /><br />% Now, let's turn to a different (but in my opinion more useful) <br />% interpretation of correlation. Let's start with a simple example.<br />data = randnmulti(100,[10 6],[1 .8; .8 1],[3 1]);<br /><br />% For each set of values, subtract off the mean and scale the values such<br />% that the values have unit length (i.e. the sum of the squares of the<br />% values is 1).<br />datanorm = unitlength(zeromean(data,1),1);<br /><br />% The idea is that each set of values represents a vector in a <br />% 100-dimensional space. After mean-subtraction and unit-length-normalization,<br />% the vectors lie on the unit sphere. The correlation is simply<br />% the dot-product between the two vectors. 
If the two vectors are<br />% very similar to one another, the dot-product will be high (close to 1);<br />% if the two vectors are not very similar to one another (think: randomly <br />% oriented), the dot-product will be low (close to 0); if the two vectors<br />% are very anti-similar to one another (think: pointing in opposite <br />% directions), the dot product will be very negative (close to -1).<br />% Let's visualize this for our example.<br /> % the idea here is to project the 100-dimensional space onto<br /> % two dimensions so that we can actually visualize the data.<br /> % the first dimension is simply the first vector.<br /> % the second dimension is the component of the second vector that<br /> % is orthogonal to the first vector.<br />dim1 = datanorm(:,1);<br />dim2 = unitlength(projectionmatrix(datanorm(:,1))*datanorm(:,2));<br /> % compute the coordinates of the two vectors in this reduced space.<br />c1 = datanorm(:,1)'*[dim1 dim2];<br />c2 = datanorm(:,2)'*[dim1 dim2];<br /> % make a figure<br />figure(997); clf; hold on;<br />axis square; axis([-1.2 1.2 -1.2 1.2]);<br />h1 = drawarrow([0 0; 0 0],[c1; c2],[],[],[],'LineWidth',4,'Color',[1 .8 .8]);<br />h2 = drawellipse(0,0,0,1,1,[],[],'k-');<br />h3 = plot([c2(1) c2(1)],[0 c2(2)],'k--');<br />h4 = plot([0 c2(1)],[0 0],'r-','LineWidth',4);<br />xlabel('Dimension 1');<br />ylabel('Dimension 2');<br />legend([h1(1) h2 h4],{'Data' 'Unit sphere' 'Projection'});<br />r = dot(datanorm(:,1),datanorm(:,2));<br />title(sprintf('r = %.4g',r));<br /><br /><a href="http://4.bp.blogspot.com/-vGVF62yTKOg/Tq42-EVOSaI/AAAAAAAAApc/C7X9iScH3O0/s1600/006-correlation-005.png"><img border="0" src="http://4.bp.blogspot.com/-vGVF62yTKOg/Tq42-EVOSaI/AAAAAAAAApc/C7X9iScH3O0/s1600/006-correlation-005.png" /></a><br /><br />% The vector interpretation lends itself readily to the concept<br />% of variance explained. Suppose that we are using one vector A <br />% to predict the other vector B in a linear regression sense. 
Then,<br />% the weight applied to vector A is equal to the correlation <br />% value. (This is because inv(A'*A)*A'*B = inv(1)*A'*B = A'*B = r.)<br />% Let's visualize this.<br />figure(996); clf; hold on;<br />axis square; axis([-1.2 1.2 -1.2 1.2]);<br />h1 = drawarrow([0 0],c2,[],[],[],'LineWidth',4,'Color',[1 0 0]);<br />h2 = drawarrow([0 0],r*c1,[],[],[],'LineWidth',4,'Color',[0 1 0]);<br />h3 = plot([r r],[0 c2(2)],'b-');<br />h4 = drawellipse(0,0,0,1,1,[],[],'k-');<br />text(.3,.38,'1');<br />text(.35,-.1,'r');<br />xlabel('Dimension 1');<br />ylabel('Dimension 2');<br />legend([h1 h2 h3 h4],{'Vector B' 'Vector A (scaled by r)' 'Residuals' 'Unit sphere'});<br /><br /><a href="http://2.bp.blogspot.com/-KULEt3wPAMw/Tq43AUv6_KI/AAAAAAAAApk/3zX7z6IFtV8/s1600/006-correlation-006.png"><img border="0" src="http://2.bp.blogspot.com/-KULEt3wPAMw/Tq43AUv6_KI/AAAAAAAAApk/3zX7z6IFtV8/s1600/006-correlation-006.png" /></a><br /><br />% So, we ask, how much variance in vector B is explained by vector A?<br />% The total variance in vector B is 1^2. By the Pythagorean Theorem,<br />% the total variance in the residuals is 1^2 - r^2. So, the fraction<br />% of variance that we do not explain is (1^2 - r^2) / 1^2, and<br />% so the fraction of variance that we do explain is 1 - (1^2 - r^2) / 1^2,<br />% which simplifies to r^2. (Note that here, in order to keep things simple,<br />% we have invoked a version of variance that omits the division by the number <br />% of data points. 
Under this version of variance, the variance associated<br />% with a zero-mean vector is simply the sum of the squares of the elements <br />% of the vector, which is equivalent to the square of the vector's length.<br />% Thus, there is a nice equivalence between variance and squared distance,<br />% which we have exploited for our interpretation.)<br /><br /><span style="font-size: large;"><b>FINAL REMARKS</b></span><br /><br />There is much more to be said about correlation; we have covered here only the basics.<br /><br />One important caveat to keep in mind is that <b>correlation is not sensitive to offset and gain</b>. This is because the mean of each set of numbers is subtracted off and because the scale of each set of numbers is normalized out. This has several consequences:<br /><ul><li>Correlation indicates how well <b>deviations relative to the mean</b> can be predicted. This is natural, since variance (normally defined) is also relative to the mean.</li><li>When using correlation as an index of how well one set of numbers predicts another set of numbers, you are <b>implicitly allowing scale and offset parameters</b> in your model. If you want to be completely parameter-free, you should instead use a metric like R^2 (which will be described in a later post).</li></ul>Kendrick Kayhttp://www.blogger.com/profile/07569573850095805172noreply@blogger.com0tag:blogger.com,1999:blog-6827056087436579459.post-83943633331704353952011-10-30T12:45:00.000-07:002011-10-30T12:49:27.208-07:00Randomization testsRandomization is a simple and intuitive method for establishing statistical significance. 
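The core shuffle-and-recompute recipe takes only a few lines; here is a generic Python/NumPy sketch (synthetic data with an assumed linear trend; this is not the `randomization` utility used in the MATLAB code below):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(1, 51) + 30 * rng.standard_normal(50)  # noisy upward trend

def corr_with_time(v):
    """Pearson correlation between a series and time (1, 2, ..., n)."""
    return np.corrcoef(v, np.arange(1, len(v) + 1))[0, 1]

val = corr_with_time(x)

# Shuffle the series many times; under the null hypothesis that the values
# have no dependence on time, shuffled series are equivalent to the original.
dist = np.array([corr_with_time(rng.permutation(x)) for _ in range(10000)])
pval = np.mean(dist > val)  # one-tailed p-value
```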
We provide here two examples in which a set of data is randomly shuffled multiple times, the results of which demonstrate that the original order of the data is in fact special.<br /><br /><b><span style="font-size: large;">CODE</span></b><br /><br />% Let's generate a time-series with an upward linear trend,<br />% corrupted by a large amount of noise.<br />x = (1:50) + 30*randn(1,50);<br /><br />% Visualize the time-series.<br />figure(999); clf;<br />plot(x);<br />xlabel('Time');<br />ylabel('Value');<br />title('Time-series data');<br /><br /><a href="http://3.bp.blogspot.com/-ttE04LmgwUY/Tq2oAaDNg2I/AAAAAAAAAY8/MQr7F4N9Tl4/s1600/005-randomization-001.png"><img border="0" src="http://3.bp.blogspot.com/-ttE04LmgwUY/Tq2oAaDNg2I/AAAAAAAAAY8/MQr7F4N9Tl4/s1600/005-randomization-001.png" /></a><br /><br />% Suppose we didn't know that the data came from a model<br />% with an upward trend. How can we test whether the observed<br />% trend is statistically significant?<br /><br />% Let's perform a randomization test. The logic is as follows:<br />% The null hypothesis is that the data points have no dependence on <br />% time. If the null hypothesis is true, then we can randomly permute <br />% the data points and this should produce datasets that are equivalent <br />% to the original dataset. So, we will generate random datasets, <br />% compute a statistic from these datasets, and compare these values to<br />% the statistic generated from the original dataset. Our statistic will<br />% be the correlation (Pearson's r) between the time-series and a diagonal line.<br />[pval,val,dist] = randomization(x,2,@(x) calccorrelation(x,1:length(x)),10000,1);<br /><br />% What we have done is to compute correlation values for 10000 randomly permuted<br />% datasets. 
Let's look at the distribution of correlation values obtained,<br />% and let's see where the original correlation value lies.<br />figure(998); clf; hold on;<br />hist(dist,100);<br />h = straightline(val,'v','r-');<br />xlabel('Correlation');<br />ylabel('Frequency');<br />legend(h,{'Actual correlation'});<br />title('Distribution of correlation values obtained via randomization');<br /><br /><a href="http://1.bp.blogspot.com/-QAyC-MrFIDo/Tq2oKrDU7lI/AAAAAAAAAZE/fH4DjrQbgvI/s1600/005-randomization-002.png"><img border="0" src="http://1.bp.blogspot.com/-QAyC-MrFIDo/Tq2oKrDU7lI/AAAAAAAAAZE/fH4DjrQbgvI/s1600/005-randomization-002.png" /></a><br /><br />% The random correlation values are most of the time less than the <br />% actual correlation value. The p-value is the proportion of random<br />% correlation values that equal or exceed the actual value.<br />ax = axis;<br />text(val,1.05*ax(4),sprintf('p = %.4g',pval),'HorizontalAlignment','center');<br />axis([ax(1:2) ax(3) 1.1*ax(4)]);<br /><br />% Given how unlikely it is to obtain the correlation value that we did,<br />% we can conclude that the null hypothesis is probably false and that <br />% the data points probably do have a dependence on time. (Note that <br />% strictly-speaking, we cannot claim that the specific form of the<br />% time-dependence is a linear trend; we can claim only that there is <br />% some dependence on time.)<br /><br />% Let's do another example. 
Generate points in a two-dimensional<br />% space with a weak correlation between the x- and y-coordinates.<br />data = randnmulti(500,[],[1 .1; .1 1]);<br /><br />% Visualize the data.<br />figure(997); clf;<br />scatter(data(:,1),data(:,2),'r.');<br />xlabel('X');<br />ylabel('Y');<br />title(sprintf('Data (r = %.4g)',calccorrelation(data(:,1),data(:,2))));<br /><br /><a href="http://4.bp.blogspot.com/-DB8K40IxIho/Tq2oSTsFA1I/AAAAAAAAAZM/DG9jFVr_NpM/s1600/005-randomization-003.png"><img border="0" src="http://4.bp.blogspot.com/-DB8K40IxIho/Tq2oSTsFA1I/AAAAAAAAAZM/DG9jFVr_NpM/s1600/005-randomization-003.png" /></a><br /><br />% Let's perform a randomization test to see if there is a statistically<br />% significant relationship between the x- and y-coordinates. If there is<br />% no relationship, then we can randomly permute the x-coordinates<br />% relative to the y-coordinates and this should produce datasets <br />% that are equivalent to the original dataset. Our statistic of<br />% interest will be the correlation (Pearson's r) between x- and<br />% y-coordinates of each dataset.<br />[pval,val,dist] = randomization(data(:,1),1,@(x) calccorrelation(x,data(:,2)),10000,1);<br /><br />% What we have done is to compute correlation values for 10000 randomly permuted<br />% datasets. 
Let's look at the distribution of correlation values obtained,<br />% and let's see where the original correlation value lies.<br />figure(996); clf; hold on;<br />hist(dist,100);<br />h = straightline(val,'v','r-');<br />xlabel('Correlation');<br />ylabel('Frequency');<br />legend(h,{'Actual correlation'});<br />title('Distribution of correlation values obtained via randomization');<br />ax = axis;<br />text(val,1.05*ax(4),sprintf('p = %.4g',pval),'HorizontalAlignment','center');<br />axis([ax(1:2) ax(3) 1.1*ax(4)]);<br /><br /><a href="http://2.bp.blogspot.com/-_JAhIU-tR4Q/Tq2oZZqFw6I/AAAAAAAAAZU/wUBsiGkLPy8/s1600/005-randomization-004.png"><img border="0" src="http://2.bp.blogspot.com/-_JAhIU-tR4Q/Tq2oZZqFw6I/AAAAAAAAAZU/wUBsiGkLPy8/s1600/005-randomization-004.png" /></a><br /><br />% The random correlation values are most of the time less than the <br />% actual correlation value. The p-value is the proportion of random<br />% correlation values that equal or exceed the actual value.<br />% The null hypothesis (that there is no dependence between the x-<br />% and y-coordinates) is probably false.Kendrick Kayhttp://www.blogger.com/profile/07569573850095805172noreply@blogger.com0tag:blogger.com,1999:blog-6827056087436579459.post-60447594558120116032011-10-27T08:13:00.000-07:002011-10-27T08:13:28.022-07:00Correlated regressorsIn ordinary least-squares linear regression, correlated regressors lead to unstable model parameter estimates. The intuition is that given two correlated regressors, it is difficult to determine how much of the data is due to one regressor and how much is due to the other. 
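The instability can also be quantified directly. Here is a Python/NumPy sketch that mirrors the simulation below, comparing the spread of estimated weights for a nearly orthogonal design versus a highly correlated one (the matrices and noise level are taken from the MATLAB examples):

```python
import numpy as np

rng = np.random.default_rng(0)

def weight_sd(X, noise_sd=2.0, n=2000):
    """Std. dev. of least-squares weight estimates across noisy measurements."""
    data = np.array([24.0, 25.0])
    h = np.empty((n, 2))
    for i in range(n):
        y = data + noise_sd * rng.standard_normal(2)
        h[i], *_ = np.linalg.lstsq(X, y, rcond=None)
    return h.std(axis=0)

near_orthogonal = np.array([[10.0, 1.0], [1.0, 10.0]])
correlated = np.array([[6.0, 5.0], [5.0, 6.0]])

sd_orth = weight_sd(near_orthogonal)
sd_corr = weight_sd(correlated)
# The correlated design yields a much larger spread in the estimated weights
```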
Let's look at this geometrically.<br /><br /><b><span style="font-size: large;">CODE</span></b><br /><br />% Generate two regressors (in the columns of the matrix).<br />% These two regressors are relatively uncorrelated (nearly orthogonal).<br />X = [10 1;<br /> 1 10];<br /><br />% Generate some data (no noise has been added yet).<br />data = [24 25]';<br /><br />% Simulate 100 measurements of the data (with noise added).<br />% For each measurement, estimate the weights on the regressors.<br />y = zeros(2,100);<br />h = zeros(2,100);<br />for rep=1:100<br /> y(:,rep) = data + 2*randn(2,1);<br /> h(:,rep) = inv(X'*X)*X'*y(:,rep);<br />end<br /><br />% Estimate weights on the regressors for the case of no noise.<br />htrue = inv(X'*X)*X'*data;<br /><br />% Now visualize the results<br />figure(999); clf; hold on;<br />h1 = scatter(y(1,:),y(2,:),'g.');<br />h2 = scatter(data(1),data(2),'k.');<br />axis square; axis([0 50 0 50]);<br />h3 = drawarrow(repmat([0 0],[2 1]),X','r-',[],10);<br />for p=1:size(X,2)<br /> h4 = scatter(X(1,p)*h(p,:),X(2,p)*h(p,:),25,'gx');<br /> h5 = scatter(X(1,p)*htrue(p),X(2,p)*htrue(p),'k.');<br />end<br />uistack(h3,'top');<br />h6 = drawarrow(X(:,1)',X(:,2)','b-',[],0);<br />xlabel('dimension 1');<br />ylabel('dimension 2');<br />legend([h1 h2 h3(1) h4 h5 h6], ...<br /> {'measured data' 'noiseless data' 'regressors' ...<br /> 'estimated weights' 'true weights' 'difference between regressors'});<br /><br /><a href="http://4.bp.blogspot.com/-rYiAjoLqnLU/Tqlz4-ndcBI/AAAAAAAAAYk/hyZ35aXrUnA/s1600/004-correlatedregressors-001.png"><img border="0" src="http://4.bp.blogspot.com/-rYiAjoLqnLU/Tqlz4-ndcBI/AAAAAAAAAYk/hyZ35aXrUnA/s1600/004-correlatedregressors-001.png" /></a><br />The green X's represent each regressor scaled by the weight estimated for that regressor in each of the 100 simulations. The X's are indicative of how reliably we can estimate the weights. 
In this example, the weights are estimated quite reliably (the spread of the X's is relatively small).<br /><br />% Let's repeat the simulation but now with<br />% two regressors that are highly correlated.<br />X = [6 5; <br /> 5 6];<br /><br /><a href="http://3.bp.blogspot.com/-ygfWS2LmzaA/Tql0C7faUPI/AAAAAAAAAYs/iW2LUAUwRok/s1600/004-correlatedregressors-002.png"><img border="0" src="http://3.bp.blogspot.com/-ygfWS2LmzaA/Tql0C7faUPI/AAAAAAAAAYs/iW2LUAUwRok/s1600/004-correlatedregressors-002.png" /></a><br />In this example, the two regressors are highly correlated and weight estimation is unreliable. To understand why this happens, examine the difference between the regressors. Notice that the difference between the regressors is quite small. Noise in the data shifts the data along this difference, giving rise to substantially different parameter estimates. For example, if the measured data shifts towards the upper-left, then this tends to produce high weights for the upper-left regressor (and low weights for the bottom-right regressor); if the measured data shifts towards the bottom-right, then this tends to produce high weights for the bottom-right regressor (and low weights for the upper-left regressor).<br /><br /><span style="font-size: large;"><b>OBSERVATIONS</b></span><br /><br />The stability of model parameter estimates is determined (in part) by the amount of noise in the direction of the regressor difference. If the projection of the noise onto the regressor difference has small variance (as in the first example), then parameter estimates will tend to be stable; if the projection has large variance (as in the second example), then parameter estimates will tend to be unstable.<br /><br />So how can we obtain better parameter estimates in the case of correlated regressors? 
One solution is to use regularization strategies (which will be described in a later post).Kendrick Kayhttp://www.blogger.com/profile/07569573850095805172noreply@blogger.com0tag:blogger.com,1999:blog-6827056087436579459.post-58612172496849648252011-10-25T12:39:00.000-07:002011-10-25T12:39:57.216-07:00Geometric interpretation of linear regressionLinear regression can be given a nice intuitive geometric interpretation.<br /><br /><b><span style="font-size: large;">CODE</span></b><br /><br />% Generate random data (just two data points)<br />y = 4+rand(2,1);<br /><br />% Generate a regressor.<br />x = .5+rand(2,1);<br /><br />% Our model is simply that the data can be explained, to some extent,<br />% by a scale factor times the regressor, e.g. y = x*c + n, where c is<br />% a free parameter and n represents the residuals. Normally, we <br />% want the best fit to the data, that is, we want the residuals to be <br />% as small as possible (in the sense that the sum of the squares<br />% of the residuals is as small as possible). 
This can be given<br />% a nice geometric interpretation: the data is just a single point<br />% in a two-dimensional space, the regressor is a vector in this space,<br />% and we are trying to scale the vector to get as close to the data<br />% point as possible.<br />figure(999); clf; hold on;<br />h1 = scatter(y(1),y(2),'ko','filled');<br />axis square; axis([0 6 0 6]);<br />h2 = drawarrow([0 0],x','r-');<br />xlabel('dimension 1');<br />ylabel('dimension 2');<br /><br />% Let's estimate the weight on the regressor that minimizes the<br />% squared error with respect to the data.<br />c = inv(x'*x)*x'*y;<br /><br />% Now let's plot the model fit.<br />modelfit = x*c;<br />h3 = drawarrow([0 0],modelfit','g-');<br />uistack(h3,'bottom');<br /><br />% Calculate the residuals and show this pictorially.<br />residuals = y - modelfit;<br />h4 = drawarrow(modelfit',y','b-',[],0);<br />uistack(h4,'bottom');<br /><br />% Put a legend up<br />legend([h1 h2(1) h3 h4],{'data' 'regressor' 'model fit' 'residuals'});<br /><br /><a href="http://4.bp.blogspot.com/-6aseMXbJBWc/TqcP0B87XqI/AAAAAAAAAYM/1JRBytoGEFA/s1600/003-regressiongeometry-001.png"><img border="0" src="http://4.bp.blogspot.com/-6aseMXbJBWc/TqcP0B87XqI/AAAAAAAAAYM/1JRBytoGEFA/s1600/003-regressiongeometry-001.png" /></a><br /><br />% OK. 
Now let's do an example for the case of two regressors.<br />% One important difference is that the model is now a weighted sum<br />% of the two regressors and so there are two free parameters.<br />y = 4+rand(2,1);<br />x = .5+rand(2,2);<br />figure(998); clf; hold on;<br />h1 = scatter(y(1),y(2),'ko','filled');<br />axis square; axis([0 6 0 6]);<br />h2 = drawarrow([0 0; 0 0],x','r-');<br />xlabel('dimension 1');<br />ylabel('dimension 2');<br />c = inv(x'*x)*x'*y;<br />modelfit = x*c;<br />h3 = drawarrow([0 0],modelfit','g-');<br />uistack(h3,'bottom');<br />residuals = y - modelfit;<br />h4 = drawarrow(modelfit',y','b-',[],0);<br />uistack(h4,'bottom');<br /><br />% Each regressor makes a contribution to the final model fit,<br />% so let's plot the individual contributions.<br />h3b = [];<br />for p=1:size(x,2)<br /> contribution = x(:,p)*c(p);<br /> h3b(p) = drawarrow([0 0],contribution','c-');<br />end<br />uistack(h3b,'bottom');<br /><br />% Finally, put a legend up<br />legend([h1 h2(1) h3b(1) h3 h4],{'data' 'regressors' 'contributions' 'model fit' 'residuals'});<br /><br /><a href="http://3.bp.blogspot.com/-TnXRQ4VS6MI/TqcP3uxrWPI/AAAAAAAAAYU/rFvPoYmB2Qw/s1600/003-regressiongeometry-002.png"><img border="0" src="http://3.bp.blogspot.com/-TnXRQ4VS6MI/TqcP3uxrWPI/AAAAAAAAAYU/rFvPoYmB2Qw/s1600/003-regressiongeometry-002.png" /></a><br /><br /><span style="font-size: large;"><b>OBSERVATIONS</b></span><br /><br />In the first example, notice that the model fit lies at a right angle (i.e. is orthogonal) to the residuals. 
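This orthogonality can be checked numerically by reusing the variables from the code above (the dot products should be zero up to floating-point precision):<br /><br />% The residuals are orthogonal to the model fit (and hence to the regressor):<br />modelfit' * residuals % approximately 0<br />x' * residuals % approximately 0<br /><br />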
This makes sense because the model fit is the point (on the line defined by the regressor) that is closest in a Euclidean sense to the data.<br /><br />In the second example, the model fits the data perfectly because there are as many regressors as there are data points (and the regressors are not collinear).Kendrick Kayhttp://www.blogger.com/profile/07569573850095805172noreply@blogger.com0tag:blogger.com,1999:blog-6827056087436579459.post-49640308670767558872011-10-23T22:17:00.000-07:002011-10-23T22:34:16.552-07:00Error in two dimensionsRegression normally attributes error to the dependent variable, but it is possible to fit regression models that attribute errors to both dependent and independent variables.<br /><br /><b><span style="font-size: large;">CODE</span></b><br /><br />% Generate some random data points.<br />x = randn(1,1000);<br />y = randn(1,1000);<br /><br />% Fit a line that minimizes the squared error between the y-coordinate of the data <br />% and the y-coordinate of the fitted line. Bootstrap to see the variability of the fitted line.<br />paramsA = bootstrp(20,@(a,b) polyfit(a,b,1),x',y');<br /><br />% Fit a line that minimizes the sum of the squared distances between the data points<br />% and the line. 
Bootstrap to see the variability of the fitted line.<br />paramsB = bootstrp(20,@fitline2derror,x',y');<br /><br />% Visualize the results.<br />figure(999); clf; hold on;<br />scatter(x,y,'k.');<br />ax = axis;<br />for p=1:size(paramsA,1)<br /> h1 = plot(ax(1:2),polyval(paramsA(p,:),ax(1:2)),'r-');<br /> h2 = plot(ax(1:2),polyval(paramsB(p,:),ax(1:2)),'b-');<br />end<br />axis(ax);<br />xlabel('x');<br />ylabel('y');<br />title('Red minimizes squared error on y; blue minimizes squared error on both x and y');<br /><br /><a href="http://1.bp.blogspot.com/-WQBijqmF_y0/TqTzKjaoKBI/AAAAAAAAAX8/YE9c817TAHk/s1600/002-fitlinewithequalerror-001.png"><img border="0" src="http://1.bp.blogspot.com/-WQBijqmF_y0/TqTzKjaoKBI/AAAAAAAAAX8/YE9c817TAHk/s1600/002-fitlinewithequalerror-001.png" /></a><br /><br />% Now, let's repeat for a different dataset.<br />temp = randnmulti(1000,[],[1 .5; .5 1]);<br />x = temp(:,1)';<br />y = temp(:,2)';<br /><br /><a href="http://4.bp.blogspot.com/-azt6_AtqmGk/TqTzR83oesI/AAAAAAAAAYE/se3_t_ShIv8/s1600/002-fitlinewithequalerror-002.png"><img border="0" src="http://4.bp.blogspot.com/-azt6_AtqmGk/TqTzR83oesI/AAAAAAAAAYE/se3_t_ShIv8/s1600/002-fitlinewithequalerror-002.png" /></a><br /><br /><span style="font-size: large;"><b>OBSERVATIONS</b></span><br /><br />First example: When minimizing error on y, the fitted lines tend to be horizontal. This is because the best that the model can do is to basically predict the mean y-value regardless of what the x-value is. When minimizing error on both x and y, all lines are basically equally bad, giving rise to wildly different line fits across different bootstraps.<br /><br />Second example: Minimizing error on y produces lines that are closer to horizontal than the lines produced by minimizing error on both x and y. 
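In fact, the line that minimizes error on both x and y (sometimes called total least squares) points along the direction of maximal variance in the data, i.e. the first principal component. A quick check of this, assuming the x and y from the second example, and assuming that fitline2derror accepts column vectors and returns [slope intercept] (as in its use with polyval above):<br /><br />% The 2D-error line should match the first principal component of the data.<br />data = [x' y'];<br />data = bsxfun(@minus,data,mean(data,1)); % subtract the mean of each column<br />[U,S,V] = svd(data,0);<br />slopePC = V(2,1)/V(1,1) % slope of the first principal component<br />params = fitline2derror(x',y');<br />params(1) % should be close to slopePC<br /><br />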
Notice that the lines produced by minimizing error on both x and y are aligned with the long axis of the Gaussian cloud.Kendrick Kayhttp://www.blogger.com/profile/07569573850095805172noreply@blogger.com0tag:blogger.com,1999:blog-6827056087436579459.post-15609295023158628292011-10-23T20:19:00.000-07:002011-10-23T22:34:33.908-07:00Error bar judgmentError bars are useful because they allow us to figure out how much of the data is signal and how much of the data is noise. We want to pay attention to aspects of the data that are real (i.e. outside of the error) and discount aspects of the data that are due to chance (i.e. within the error). Error bars that reflect +/- 1 standard error are surprisingly aggressive: they can make chance differences look real (see below).<br /><br /><b><span style="font-size: large;">CODE</span></b><br /><br />% We are going to measure 40 different conditions.<br />% For the first twenty conditions, the true signal will be 0.<br />% For the second twenty conditions, the true signal will be 1.<br />numconditions = 40;<br /><br />% We will make 30 different measurements for each condition.<br />n = 30;<br /><br />% Let's perform a simulation, visualize the results,<br />% and then do it again (ad nauseam).<br />while 1<br /><br /> % these are the true signal values<br /> signal = [zeros(1,numconditions/2) ones(1,numconditions/2)];<br /> <br /> % this is the noise (random Gaussian noise)<br /> noise = randn(n,numconditions);<br /> <br /> % these are the measurements<br /> measurement = bsxfun(@plus,signal,noise);<br /> <br /> % given the measurements, let's calculate the mean <br /> % and standard error for each condition.<br /> mn = mean(measurement,1);<br /> se = std(measurement,[],1)/sqrt(n);<br /><br /> % now, let's visualize the results<br /> figure(999); clf; hold on;<br /> bar(1:numconditions,mn);<br /> errorbar2(1:numconditions,mn,se,'v','r-','LineWidth',2);<br /> plot(1:numconditions,signal,'b-','LineWidth',2);<br /> axis([0 numconditions+1 -1 2]);<br /> title('Black is
the mean; red is the standard error; blue is the true signal');<br /> pause;<br /><br />end<br /><br /><a href="http://1.bp.blogspot.com/-MX_UF74D82I/TqTdfxDlwFI/AAAAAAAAAX0/vVSP_BjiG-s/s1600/001-errorbarjudgment.png"><img border="0" src="http://1.bp.blogspot.com/-MX_UF74D82I/TqTdfxDlwFI/AAAAAAAAAX0/vVSP_BjiG-s/s1600/001-errorbarjudgment.png" /></a><br /><br /><span style="font-size: large;"><b>OBSERVATIONS</b></span><br /><br />If you use +/- 1 standard error to visualize results, it may subjectively look like there are differences across conditions, even though there aren't any. For this reason, it may be useful to instead plot error bars that reflect +/- 2 standard errors.<br /><br />It is quite common to find measurements that are several standard errors away from the true value. Of course, this is completely expected given the nature of standard errors (e.g. about 5% of the time, you will find a data point that is more than 2 standard errors away from the true value).Kendrick Kayhttp://www.blogger.com/profile/07569573850095805172noreply@blogger.com0