This function is also incorrect. Again, basic logic disproves it: given more trials and a constant probability, the chance of an event occurring at least once can only increase.
My function reflects this property. See plot: http://www.wolframalpha.com/input/?i=plot+y+%3D+0.4595^9%28-%28%281+-+0.4595%29^x+-+1%29%29. Yours is plausible at face value, but it dramatically over-counts, because an event chain p_1...p_10 overlaps with the chain p_2...p_11 by nine events. That's why the probabilities you're producing are so much higher.
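To make the overlap point concrete, here is a small sketch in Python (the run length of 10, p = 0.4595, and n = 185 are assumptions taken from this discussion) comparing the naive sum of per-window probabilities against an exact Markov-chain computation of "at least one run of 10 in n trials". The naive sum counts overlapping windows separately, so it comes out strictly larger; the exact value also grows with the number of trials, as argued above.

```python
# Exact probability of at least one run of `k` successes in `n` Bernoulli(p)
# trials, via a small Markov chain over the current run length. The values
# k = 10, p = 0.4595, n = 185 are assumptions pulled from this discussion.
def prob_run_at_least_once(n, k, p):
    state = [0.0] * k       # state[j]: P(current run length is j, no run of k yet)
    state[0] = 1.0
    done = 0.0              # P(a run of length k has already occurred)
    for _ in range(n):
        new = [0.0] * k
        for j, pr in enumerate(state):
            if j + 1 == k:
                done += pr * p          # success completes a run of length k
            else:
                new[j + 1] += pr * p    # success extends the current run
            new[0] += pr * (1 - p)      # failure resets the run
        state = new
    return done

p, k, n = 0.4595, 10, 185
exact = prob_run_at_least_once(n, k, p)
naive = (n - k + 1) * p ** k    # sums every overlapping window separately
print(exact, naive)             # the naive sum is strictly larger
```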
The base case was wrong, as you said. I generally work with zero-based functions, so that was a slip of habit. Correcting it gives the following function:
f(i) = 0.4595^9 * (1 - (1 - 0.4595)^i)
f(185) = 0.000913208
Thus, it actually makes no difference within the bounds of rounding error. Had I let it expand further, it would have produced a slightly larger value than before, but only slightly.
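As a quick sanity check on that arithmetic (a short Python snippet using only the numbers already in this post):

```python
# Numeric check of f(i) = 0.4595^9 * (1 - (1 - 0.4595)^i) at i = 185
p = 0.4595
f = lambda i: p**9 * (1 - (1 - p)**i)
print(f(185))   # agrees with 0.000913208 to the precision quoted
print(p**9)     # the limit as i grows; (1 - p)^185 is already negligible
```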
I can get an experimental estimate of the probability with a program along these lines (inside a loop over i flips):
if (streak > maxStreak) maxStreak = streak;     // track the longest run seen
if (streak == 10) hits++;                       // a run of ten successes just completed
// ...then after the loop:
Console.Write("Probability: " + ((double)hits / i) + "\nMax Streak: " + maxStreak);
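A self-contained version of the same experiment, sketched here in Python (the run length of ten, p = 0.4595, and the output format are carried over from the discussion; the iteration count and seed are arbitrary stand-ins):

```python
import random

# Monte Carlo sketch: stream n Bernoulli(p) trials, count how many times a
# success streak reaches length ten, and track the longest streak seen.
# p = 0.4595 and the run length come from the discussion; n and the seed
# are placeholder choices.
def simulate(n, p=0.4595, target=10, seed=1):
    rng = random.Random(seed)
    streak = max_streak = hits = 0
    for _ in range(n):
        if rng.random() < p:
            streak += 1
            if streak == target:    # a run of `target` successes just completed
                hits += 1
            if streak > max_streak:
                max_streak = streak
        else:
            streak = 0
    return hits / n, max_streak

prob, max_streak = simulate(1_000_000)
print(f"Probability: {prob}\nMax Streak: {max_streak}")
```

Here hits / n estimates the per-flip rate at which a run of ten completes; a larger iteration count narrows the statistical error.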
I'll run this myself tomorrow, once I've got access to my programming suite. Ten trillion is a lot of iterations, but we're talking about very, very small odds, so the margin for error may still be fairly high.