
Evidence-based practice – reliability levels, recommendations, limitations

Reliability levels of evidence in evidence-based practice

Level 1 (likely reliable) Evidence – representing research results addressing clinical outcomes and meeting an extensive set of quality criteria.

Level 2 (mid-level) Evidence – representing research results addressing clinical outcomes, and using some method of scientific investigation, but not meeting the quality criteria to achieve level 1 evidence labeling.

Level 3 (lacking direct) Evidence – representing reports that are not based on scientific analysis of clinical outcomes. Examples include case series, case reports, expert opinion, and conclusions extrapolated indirectly from scientific studies.

Categories of recommendations

In guidelines and other publications, a recommendation for a clinical service is classified by the balance of risk versus benefit of the service and by the level of evidence on which this information is based. The U.S. Preventive Services Task Force uses:

  • Level A: Good scientific evidence suggests that the benefits of the clinical service substantially outweigh the potential risks. Clinicians should discuss the service with eligible patients.
  • Level B: At least fair scientific evidence suggests that the benefits of the clinical service outweigh the potential risks. Clinicians should discuss the service with eligible patients.
  • Level C: At least fair scientific evidence suggests that there are benefits provided by the clinical service, but the balance between benefits and risks is too close to justify a general recommendation. Clinicians need not offer it unless there are individual considerations.
  • Level D: At least fair scientific evidence suggests that the risks of the clinical service outweigh the potential benefits. Clinicians should not routinely offer the service to asymptomatic patients.
  • Level I: Scientific evidence is lacking, of poor quality, or conflicting, such that the risk versus benefit balance cannot be assessed. Clinicians should help patients understand the uncertainty surrounding the clinical service.


Limitations

Although evidence-based medicine is increasingly regarded as the “gold standard” for clinical practice, there are a number of limitations and criticisms of its use.


In some cases, such as in open-heart surgery, conducting randomized, placebo-controlled trials is commonly considered to be unethical, although observational studies may address these problems to some degree.


The types of trials considered “gold standard” (i.e. large randomized double-blind placebo-controlled trials) are expensive, so that funding sources play a role in what gets investigated. For example, public authorities may tend to fund preventive medicine studies to improve public health, while pharmaceutical companies fund studies intended to demonstrate the efficacy and safety of particular drugs.


A randomized controlled trial takes several years to conduct and publish, so the data are withheld from the medical community for years and may be less relevant by the time of publication.


Furthermore, evidence-based guidelines do not remove the problem of extrapolation to different populations or longer time frames. Even when several top-quality studies are available, questions remain about how far, and to which populations, their results are generalizable. Skepticism about results may also extend to areas not explicitly covered: for example, a drug may influence a “secondary endpoint” such as a test result (blood pressure, glucose, or cholesterol level) without the study having the power to show that it decreases overall mortality or morbidity in a population.

The quality of studies performed varies, making it difficult to compare them and generalize about the results.

Certain groups have historically been under-researched (racial minorities and people with many co-morbid diseases), so the literature is sparse in these areas, which limits generalization.

Publication bias

It is recognised that not all evidence is made accessible, that this can limit the effectiveness of any approach, and that efforts to reduce publication bias and retrieval bias are required.

Failure to publish negative trials is the most obvious gap, and moves to register all trials at the outset, and then to pursue their results, are underway. Changes in publication methods, particularly related to the Web, should make it easier to publish a paper on a trial that concludes it did not prove anything new, including its starting hypothesis.

Treatment effectiveness reported from clinical studies may be higher than that achieved in later routine clinical practice, because the closer patient monitoring during trials leads to much higher compliance rates.

The studies that are published in medical journals may not be representative of all the studies completed on a given topic (published and unpublished), or may be unreliable due to conflicts of interest. Thus the array of evidence available on particular therapies may not be well represented in the literature. A 2004 statement by the International Committee of Medical Journal Editors (that they will refuse to publish clinical trial results if the trial was not registered publicly at its outset) may help with this, although this has not yet been implemented.

Illegitimacy of other types of medical reports

Although it has some usefulness in clinical practice, the case report is disappearing from most of the top-ranked medical literature. As a result, data on rare medical situations, in which large randomized double-blind placebo-controlled trials cannot be conducted, may be rejected for publication and withheld from the medical community.

Political criticism

There is a good deal of criticism of evidence-based medicine, which is suspected of being, contrary to what the phrase suggests, a tool not so much for medical science as for health managers who want to introduce managerial techniques into medical administration. Thus Dr Michael Fitzpatrick writes: “To some of its critics, in its disparagement of theory and its crude number-crunching, EBM marks a return to ‘empiricist quackery’ in medical practice. Its main appeal, as Singh and Ernst suggest, is to health economists, policymakers and managers, to whom it appears useful for measuring performance and rationing resources.”