How did you calculate the standard deviation?

Maybe I am misunderstanding you. The three sigma rule says that in a normal distribution approximately 99.7% of the values will fall within three standard deviations (three sigmas) of the mean, so I assume that is what you are talking about.
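For reference, the 99.7% figure falls straight out of the normal CDF; a quick sketch with Python's standard library shows the familiar 68/95/99.7 ladder:

```python
from statistics import NormalDist

# Probability mass of a standard normal within k standard deviations of the mean
nd = NormalDist()  # mean 0, standard deviation 1
for k in (1, 2, 3):
    p = nd.cdf(k) - nd.cdf(-k)
    print(f"within {k} sigma: {p:.4f}")
# within 3 sigma: 0.9973
```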

You performed a test involving four-hundred-and-some rolls, and you had a success rate of 13.4%. If you are trying to claim that this result is so far outside the expected distribution that we should conclude the system is broken, you first need to know how much variance to expect in a sample of four-hundred-and-some rolls. How did you go about calculating that?
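Just to make the arithmetic concrete: if the rolls are independent, the spread in the observed success rate follows the binomial formula sqrt(p(1-p)/n). A rough sketch, taking n = 450 purely as a stand-in for "four-hundred-and-some" (that number is my assumption, not yours):

```python
import math

p = 0.20   # success rate the programmers claim
n = 450    # stand-in for "four-hundred-and-some" rolls (assumed)

# Standard deviation of the observed proportion over n independent trials
sd = math.sqrt(p * (1 - p) / n)
print(f"expected sd of the success rate: {sd:.4f}")

# How many sds away from the claimed 20% is the observed 13.4%?
z = (0.134 - p) / sd
print(f"observed 13.4% sits {z:.1f} sds from 20%")
```

The point is not the specific numbers but that the expected spread shrinks with the sample size, so the claim needs that calculation spelled out.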

By the by, unless I am totally missing the point, this kind of confidence interval calculation does not seem like a very good way to make an argument about probability.

Actually, the confidence interval calculation is what I use at work to validate math models against test results for the products we build. At my job, if test data falls outside the 90 percent confidence interval, we say it fails to validate the model. In this case the programmers are telling us 20 percent is the outcome we should see. As for how it is computed, I used the method we use at work; it also agrees with my college textbook, and I see similar formulas on Wikipedia as well.
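For anyone who wants to check the procedure, here is a minimal sketch of that kind of test using the normal approximation to the binomial. The roll count of 450 is my stand-in for "four-hundred-and-some", not the exact figure:

```python
from statistics import NormalDist
import math

p, n = 0.20, 450   # claimed rate; assumed roll count
observed = 0.134   # measured success rate

# Two-sided 90% confidence band around the claimed rate,
# using the normal approximation to the binomial
z = NormalDist().inv_cdf(0.95)        # critical value for a 90% band
sd = math.sqrt(p * (1 - p) / n)
lo, hi = p - z * sd, p + z * sd
print(f"90% band: [{lo:.3f}, {hi:.3f}]")
print("inside band" if lo <= observed <= hi else "outside band")
```

If the observed rate lands outside that band, the test result fails to validate the 20 percent model under this criterion.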

As for the people complaining that this is just an RNG "thing": the point of a confidence interval test is to define a band of results you can expect to see from a set of sample trials that are all independent of one another.