**Guest Post by Jon**

In this post we demonstrate how much this strategy inflates the false positive rate. We sample from a distribution with zero mean and test whether the sample mean differs from 0. As we will see, if we continually peek at the data, and then decide whether to continue data collection contingent on the partial results, we wind up with an elevated chance of rejecting the null hypothesis.

```matlab
% Simulate 1000 experiments with 200 data points each
x = randn(200, 1000);

% We expect about 5% false positives, given an alpha of 0.05
% (newer MATLAB releases spell this ttest(x, 0, 'Alpha', 0.05))
disp(mean(ttest(x, 0, 0.05)));

% Now let's calculate the false positive rate for different sample
% sizes. We assume a minimum of 10 samples and a maximum of 200.
h = zeros(size(x));
for ii = 10:200
    h(ii, :) = ttest(x(1:ii, :), 0, 0.05);
end

% With a fixed sample size, the chance of a false positive is about
% 0.05, no matter how many data points we collect
figure(99);
plot(1:200, mean(h, 2), 'r-', 'LineWidth', 2)
ylim([0 1])
ylabel('Probability of a false positive')
xlabel('Number of samples')

% How would the false positive rate change if we peeked at the data?
% To simulate peeking, we take the cumulative sum of h values for each
% simulation. The result is that if at any point we reject the null
% (h = 1), the remaining points for that simulation also count as
% rejections.
peekingH = logical(cumsum(h));
figure(99); hold on
plot(1:200, mean(peekingH, 2), 'k-', 'LineWidth', 2)
legend('No peeking', 'Peeking')

% The plot demonstrates the problem with peeking: we set the false
% positive rate to our alpha value (here, 0.05), but by testing
% repeatedly we have created an actual false positive rate that is
% much higher.
```
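For readers without MATLAB, here is a rough Python translation of the same simulation using NumPy and SciPy. The variable names (`fixed_n_rate`, `peeking_rate`) are mine, not from the original post, and `h.any(axis=0)` plays the role of the `logical(cumsum(h))` trick above.

```python
# Replicate the peeking simulation: 1000 experiments of up to 200 draws
# from a standard normal (true mean 0), testing H0: mean = 0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_max, n_sims, alpha = 200, 1000, 0.05
x = rng.standard_normal((n_max, n_sims))

# h[n-1, j] is True if the t-test rejects H0 using the first n samples
# of simulation j; as above, we test at every n from 10 to 200.
h = np.zeros((n_max, n_sims), dtype=bool)
for n in range(10, n_max + 1):
    p = stats.ttest_1samp(x[:n, :], 0.0, axis=0).pvalue
    h[n - 1, :] = p < alpha

fixed_n_rate = h[-1, :].mean()       # test once, at n = 200: roughly alpha
peeking_rate = h.any(axis=0).mean()  # stop as soon as any test rejects

print(f"fixed-n false positive rate: {fixed_n_rate:.3f}")
print(f"peeking false positive rate: {peeking_rate:.3f}")
```

As in the MATLAB version, the fixed-sample rate hovers around the nominal 0.05, while the peeking rate is several times higher.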

Very interesting... This is a bit like the issue of multiple comparisons, except that instead of testing several different samples for an effect, you are testing the same sample multiple times (albeit one that grows in size).

I have never seen a paper describe how an experiment was actually conducted (e.g. we tried a pilot, then tweaked this or that, and then collected a few more data points, etc.); rather, papers tend to summarize and streamline the messy experiment into a nice clean one. There is of course a good reason for this: it makes the paper simpler. But as this post shows, in some cases this process can be highly misleading and statistically invalid.

I should have noted in the original post: I got this idea from a discussion with Nick Davidenko many years ago. -Jon
