
Probability of Error: Experimental Errors, Type I, Type II, and Type III Errors, and How to Avoid Them

The probability of error can be thought of as the probability of making a wrong decision, and each type of error has its own probability value.



Whilst many people will not have heard of Type I or Type II errors, most will be familiar with the terms 'false positive' and 'false negative', mainly from medicine.

A patient might take an HIV test with a promised 99.9% accuracy rate. This means that 1 in every 1000 tests could give a 'false positive', informing a patient that they have the virus when they do not.

Conversely, the test could also show a false negative reading, giving an HIV positive patient the all-clear.

This is why most medical tests require duplicate samples, to stack the odds favourably: a one in one thousand chance of error becomes a one in one million chance if two independent samples are tested, because the probabilities of independent errors multiply.
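That multiplication can be checked directly. A minimal sketch, assuming the article's 1-in-1000 false-positive rate and fully independent samples:

```python
# Probability that two independent tests BOTH give a false positive,
# assuming each test has a 1-in-1000 false-positive rate.
single_test_error = 1 / 1000

# For independent tests, the error probabilities multiply.
duplicate_error = single_test_error ** 2

print(f"Single test:    1 in {1 / single_test_error:,.0f}")
print(f"Duplicate test: 1 in {1 / duplicate_error:,.0f}")
```

Note that this only holds if the two samples really are independent; a systematic fault affecting both samples (for example, contamination at collection) is not reduced by duplication.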

With any scientific process, there is no such ideal as total proof or total rejection, and researchers must, by necessity, work upon probabilities. That means that, whatever level of proof was reached, there is still the possibility that the results may be wrong.

This could take the form of a false rejection, or acceptance, of the null hypothesis.



A Type I error is often referred to as a 'false positive', and occurs when the null hypothesis is incorrectly rejected in favour of the alternative.

In the HIV example, the null hypothesis states that the patient is free of the virus, and the alternative hypothesis states that they carry it. A Type I error would indicate that the patient has the virus when they do not: a false rejection of the null.


A Type II error is the opposite of a Type I error: the false acceptance of the null hypothesis. A Type II error, also known as a false negative, would imply that the patient is free of HIV when they are not, a dangerous misdiagnosis.

With a Type II error, a chance to reject the null hypothesis was lost, and no conclusion is inferred from a non-rejected null. A Type I error is usually regarded as the more serious of the two, because the null hypothesis has been wrongly rejected.

Medicine, however, is one exception: telling a patient that they are free of a disease when they are not is potentially dangerous.

How to avoid


Avoiding these errors is the reason why scientific experiments must be replicable, and why other scientists must be able to follow the exact methodology.

Even at the highest common level of proof, where P < 0.01 (the probability of a chance result is less than 1%), roughly one in every hundred experiments will still give a false result. To a certain extent, duplicate or triplicate samples reduce the chance of error, but they can still mask it if the variable causing the error is present in all samples.

If, however, other researchers, using the same equipment, replicate the experiment and find the same results, the chance of 5 or 10 experiments all giving false results is vanishingly small. This is how science regulates itself and minimizes the potential for Type I and Type II errors.
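The arithmetic behind "vanishingly small" is the same multiplication rule as before, a sketch assuming each replication is fully independent and each has a 1% false-positive rate:

```python
# Assuming each independent replication has a 1% (P < 0.01) chance of a
# false positive, the combined probability is the product of the
# individual probabilities.
alpha = 0.01

for replications in (1, 2, 5, 10):
    combined = alpha ** replications
    print(f"{replications:>2} replication(s): {combined:.0e}")
```

Five independent false positives at the 1% level have a combined probability on the order of 10^-10, which is why consistent replication is such strong evidence.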

Of course, replication is not always possible, as in one-off experiments and medical diagnosis, so the possibility of Type I and Type II errors is always a factor.


Many statisticians now recognize a third type of error, the Type III error, which occurs when the null hypothesis is rejected, but for the wrong reason.

In an experiment, a researcher might postulate a hypothesis and perform research. After analyzing the results statistically, the null is rejected.

The problem is that there may be some relationship between the variables, but it could be driven by a different mechanism than the one stated in the hypothesis. An unknown process may underlie the relationship.


Both Type I errors and Type II errors are factors that every scientist and researcher must take into account.

Replication can minimize the chances of an inaccurate result, and this is one of the major reasons why research should be replicable.

Many scientists do not accept quasi-experiments, because they are difficult to replicate and analyze.