quickNir

Members
  • Posts: 5
  • Joined

Reputation: 10 Good
  1. Intentionally funny, or just bad and not realizing it's funny? In any case, you're welcome/I apologize.
  2. This would be the first boss in the Mandalorian FP. He has two dogs as well that are both champions. I read that the dogs have random aggro tables and attack all over. That's fine. But the main boss himself... I send in Khem Val to attack. My friend, meanwhile, is a DPS Juggernaut trying to attack the two dogs. The moment I heal Khem Val I instantly get attacked. I haven't had this problem generally. What's going on?
  3. Good, I enjoyed it as well! I think your four options are a little too black and white, but I basically agree that 4 is the solution.
  4. So, ultimately I agree that the sample size is inadequate; I wrote as much in my post above. It's just the reasoning being provided that is incorrect. Why is the sample size too small? Because of our prior beliefs. This issue is being danced around (in Khevar's post as well) but not being made explicit.

     Khevar, the 99.7% is an arbitrary boundary. In fact, it's common in many areas of science to use 95% as the cut-off for significance. Pointing out that 3 more events would have brought it within the boundary is irrelevant. The fact is, the chance of getting such a low result given a rate of 20% is extremely small. The whole idea of the confidence interval is that it builds in information from both the size of the discrepancy from the proposed model and the sample size. If a measurement lies outside the 99.7% confidence interval for a given hypothesis, it carries the same weight in disproving our hypothesis regardless of the sample size.

     I also do not know what you mean by saying that it proves that "HE" isn't getting 20%. Are you claiming that the OP somehow has RE (reverse engineering) generated at different probabilities? Clearly his results are universally applicable. That they don't support the conclusion is another story, and it's because of what you describe as "practical issues", which is in fact a very good name for it. Those practical issues amount to our beliefs about Bioware and their ability to program something in line with their claims.

     Let's get quantitative: if we use the Bayesian framework, we can estimate the relative likelihood of different success rates. Let's assume that initially we have no bias about what the success rate is. We can then calculate the relative probability of any given success rate given the data. I assumed an experiment with 400 trials and 52 successes. The most likely success rate according to this is 13%, and it turns out a success rate of 13% is about 1000 times more likely than a success rate of 20% (the first sketch after this list runs this calculation). The question is this: before you saw this experiment, how many times more likely did you think a success rate of 20% (the quoted value) was than 13%? If you think it's more than 1000 times (I do), then this experiment should not change your mind. If you think that people at Bioware are incompetent and had only a 50-50 chance of getting the correct rate, with the other 50 percent distributed equally over every possible success rate, you would have thought that 20% is only about 100 times more likely than 13%, so you would find this experiment persuasive.
  5. First off, to anybody reading through this thread who doesn't have a strong background in probability and statistics: please do not assume anything written in this thread is correct. Some (a minority) of the people posting here know what they're talking about (like, for instance, the post just above mine), but there's a lot of junk here. So your best bet is really not to trust anything, but to read about this stuff somewhere else if it interests you.

     The testing technique in the original post is legitimate. I've seen tons of posts claiming that the sample size just isn't large enough, without actually responding to the numbers the OP put up. Most of those posts are simply incorrect. There is no fundamental sample size that you need. To find a discrepancy between a claimed hypothesis and reality, the size of the sample required depends on the size of the discrepancy and how certain you want to be. Sometimes 100 samples is enough. Suppose I have a coin that comes up heads 100% of the time. How long will it take you to determine my coin is rigged? Not very long. After it comes up heads even just 20 times in a row, you will be very suspicious. After 50 times it's virtually a certainty. If, on the other hand, I have a coin that comes up heads 51% of the time, it will take a very large number of samples to prove anything.

     To work out what these sample sizes are, there is no alternative except to crunch the numbers, which the original poster did. You start with the hypothesis that 20% is the probability of RE. You do the experiment. You see how likely it is that you would get the outcome you got, or one more extreme, if the probability really were 20%. If this probability is low, you are justified in discarding the hypothesis that 20% is the true probability (the second sketch after this list runs this calculation).

     Here is what (in my opinion) is missing from this discussion: Bayesian statistics. Some of the posters are correct in not criticizing the original methodology, but saying that it simply doesn't support the conclusions adequately. Why not? It seems like the confidence interval is pretty convincing. The reason is that 20% isn't just another number. It's the number given to us by the game. Since it's really easy to generate heads randomly (pseudo-randomly, technically) at a 20% level, I tend to suspect that Bioware did not screw this up. In other words, I have prior beliefs about the likelihood that 20% is the RE rate, as opposed to other values.

     Suppose I flip a coin I find on the street 100 times and get 75 heads. This is a wildly improbable result for a fair coin. Yet I will not conclude that the coin I found on the street is unfair. Why? Because if you find a random coin on the street, it is many, many times more likely to be fair than unfair (note that when I say fair, I mean within a small tolerance of 50%, as real coins are). When you work out the math, the final conclusion is still that the coin is likely fair, because it is more likely that I found a fair coin and had an unusual sequence of flips than that I found an unfair coin and had a usual sequence of flips (the third sketch after this list works this out). The same applies here. The evidence would be convincing if there were nothing special about 20%. But I have pretty strong prior beliefs about Bioware programmers being able to do something so simple correctly. In other words, despite the evidence, I think it is more likely that Bioware got this right and that your test was a fluke than that Bioware screwed this up and your test is representative. So I will require much, much stronger evidence before I believe that 20% is not the true rate.
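
A minimal sketch of the Bayesian comparison in post 4, assuming its numbers (400 trials, 52 successes) and a flat prior, under which the relative posterior probability of two candidate success rates reduces to their binomial likelihood ratio:

    # Likelihood ratio for the experiment described in post 4.
    # Numbers (400 trials, 52 successes) are the post's assumed data.
    from math import comb

    n, k = 400, 52

    def likelihood(p):
        # Probability of exactly k successes in n trials at success rate p.
        return comb(n, k) * p**k * (1 - p)**(n - k)

    ratio = likelihood(k / n) / likelihood(0.20)  # k/n = 0.13, the most likely rate
    print(f"13% is about {ratio:.0f}x more likely than 20% given this data")
    # Prints roughly 890, i.e. on the order of the post's "about 1000 times".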
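A sketch of the significance test post 5 describes, reusing post 4's assumed numbers (the OP's exact counts aren't quoted in this excerpt): the chance of a result this extreme or more so if the true rate really were 20%:

    # One-sided binomial tail: P(X <= k) when X ~ Binomial(n, p).
    from math import comb

    n, k, p = 400, 52, 0.20

    p_value = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))
    print(f"P(<= {k} successes at a true 20% rate) = {p_value:.1e}")
    # Comes out on the order of 1e-4, well below the 0.3% cut-off implied by
    # the 99.7% interval discussed above -- which is what licenses rejecting
    # the 20% hypothesis in the frequentist framing, before any priors.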
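A sketch of post 5's street-coin argument that posterior odds are prior odds times the likelihood ratio. The 75-heads-in-100-flips data is from the post; the prior odds (a million to one in favor of fair) and the biased alternative (75% heads) are assumptions chosen purely for illustration:

    # Posterior odds = prior odds x likelihood ratio.
    from math import comb

    n, heads = 100, 75

    def likelihood(p):
        # Probability of exactly `heads` heads in n flips at heads-rate p.
        return comb(n, heads) * p**heads * (1 - p)**(n - heads)

    prior_odds = 1e6                          # fair : unfair, assumed
    lr = likelihood(0.5) / likelihood(0.75)   # the data favors the biased coin
    posterior_odds = prior_odds * lr
    print(f"posterior odds fair:unfair = {posterior_odds:.1f}")
    # The data favors the biased coin by a factor of roughly 500,000, but the
    # assumed prior outweighs it, so the fair coin remains (slightly) more
    # likely -- the post's point that strong priors can survive striking data.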