How did you calculate the standard deviation?

Maybe I am misunderstanding you. The three sigma rule says that in a normal distribution approximately 99.7% of the values will fall within three standard deviations (three sigmas) of the mean, so I assume that is what you are talking about.

You performed a test involving four-hundred-and-some rolls and saw a success rate of 13.4%. If you want to claim that this result is so far outside the expected distribution that we should conclude the system is broken, you first need to know how much variance we expect in a sample of that size to begin with. How did you go about calculating that?
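To make it concrete, here is a rough sketch of the check I mean, treating the rolls as independent pass/fail trials. The expected success chance `p_expected` below is a made-up placeholder, since you never said what the system is supposed to give; substitute the real value.

```python
import math

n = 400            # "four-hundred-and-some" rolls, rounded down
observed = 0.134   # the observed success rate from your test
p_expected = 0.25  # HYPOTHETICAL expected rate, not from your post

# Standard deviation of the sample proportion over n Bernoulli trials
sigma = math.sqrt(p_expected * (1 - p_expected) / n)

# How many sigmas the observed rate sits from the expectation
z = (observed - p_expected) / sigma
print(f"sigma = {sigma:.4f}, z = {z:.2f}")
```

Under the three sigma rule, you would need |z| > 3 before that rule even applies, and the value of sigma depends heavily on what `p_expected` actually is.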

By the by, unless I am totally missing the point, this kind of confidence-interval calculation does not seem like a very good way to make an argument about probability in the first place.