Philosophical foundations of evidence-based medicine for veterinary clinicians

Mark A. Holmes, MA, VetMB, PhD, Department of Veterinary Medicine, University of Cambridge, Cambridge, Cambridgeshire CB3 0ES, England.

When clinicians make a decision about a patient, they use their skills as a clinician, consider the client's or patient's circumstances, and use what knowledge they have to make a rational decision. Clinical skills include a keen power of observation, the ability to take a comprehensive history, the ability to perform a thorough clinical examination, and a host of other practical skills that are often quite difficult to teach well. The client's and patient's circumstances may have a major influence on a clinical decision. For example, the approach to the resolution of an equine lameness problem is likely to be different when treating a champion dressage horse, compared with the approach when treating a Thoroughbred mare at stud. These first two aspects of clinical decision making are important, and we improve our performance as veterinarians in these areas as a result of experience and practice. The third aspect of clinical decision making is the recall and application of knowledge. Unless we are completely irrational, we judiciously select the information we use and are likely to rank any competing or conflicting information to arrive at the evidence that will help us deliver the best outcome.

Most of us at some stage in our careers are likely to encounter the legal or judicial system, which similarly attempts to determine the truth of the matter in hand. The process is analogous to the diagnostic process. Ideally, the guilty are convicted and the innocent acquitted; the courts successfully identify the true positives and the true negatives. However, we appreciate that there will be times when the truth is not determined, the guilty are acquitted (a false-negative result), or the innocent are convicted (a false-positive result). The likelihood of a correct verdict will be influenced by the quality of the evidence, how it is delivered, and the discerning powers of the judge or jury. Witnesses will contradict each other, and experts will disagree. Those making judgments will rank the evidence looking for authoritative or objective facts or opinions upon which to base their decisions and thus arrive at the truth.

The Distinction Between Belief and Truth

How do we as veterinarians arrive at the truth? Our college lecturers, the authors of textbooks, and speakers at conferences have no reason to mislead us; so why do we need to look further or consider the scientific philosophy and methods that enable us to rank the information that we use? The simple answer is that we all make mistakes (even the experts) and that at times the experts don't agree, and so we need to make our own judgments. Another reason is that as the quantity of knowledge from clinical research increases exponentially, we have considerable difficulty keeping all that knowledge updated in our memories or delivering that knowledge to our clients or patients.

Beliefs held for quite rational and understandable reasons sometimes turn out to be wrong when scrutinized.

When the author was a veterinary student in the 1980s, there was still a widely held belief that the use of oxytetracycline in horses was contraindicated because it was likely to cause a potentially fatal colitis. This belief appears to have arisen from two reports of severe colitis in horses given large doses of oxytetracycline (15 g given to three horses1 and 2 g given to two horses2,3). Although it is now appreciated that heavy dosing of any broad-spectrum antimicrobial may cause clostridial overgrowth and lead to colitis, oxytetracycline was singled out as being responsible. A 15-g dose is six times the dose that one would normally use, but a 2-g dose is within the usual therapeutic range for an adult horse, so the conclusion was understandable. It was only after graduation that the author discovered many equine practitioners continued to use oxytetracycline, taking advantage of its excellent tissue penetration and spectrum of activity, and thus its value in both adult horses and foals. However, throughout the 1980s, authors of textbooks and review articles continued to suggest that oxytetracycline should not be used. Anecdotal evidence for its use did occasionally appear in print,4 but the authors failed to address the dogma that originated from relatively flimsy evidence.5 It isn't that the authors of the original papers were wrong; they simply reported their observations. The generalization to the entire equine population, however, was inappropriate.

Scientific Method

If truth were intrinsically self-evident, we would not need a judicial system and we would not need to use judgment when making clinical decisions. So how do we arrive at the truth? For those of us who believe in the scientific basis of medicine, we pursue a philosophy or methodology that is widely applied in all scientific disciplines.

The contemporary educational approach in the United Kingdom is to introduce science by encouraging children to make predictions on the basis of prior observations. The sort of exercise that might be used would be to ask, “What do snails eat?” The children might suggest lettuce, grass, dead leaves, or cheese. The teacher might put all these potential feedstuffs into a vivarium with some snails and ask the children to predict what will happen. Hopefully, one of the children will predict that the lettuce will disappear as it gets eaten, and if the teacher is lucky, the snails will consume the lettuce. In this way, the children are introduced to observation, generating a hypothesis, and testing it, which are the keystones of science.

One of the key elements of scientific method is the concept of falsifiability expounded by the philosopher Karl Popper in the 1930s. Popper believed that no empirical hypothesis, proposition, or theory can be considered scientific if it does not allow the possibility of a contrary case or a contradicting observation. A hypothesis that all swans are white could be proven wrong by the simple test of observing a black swan. This has two consequences. The first is that if a hypothesis cannot be tested or a test cannot be envisaged, then it is not scientific. The second is that we can never achieve a scientific certainty; we can only state that the hypothesis has not failed any tests performed so far. By employing the scientific method, we may not be able to see into the future but we can have some confidence in the observations we have made in the past.

Skepticism

Although it may seem a contradiction, we also open our minds by being skeptics. When we obtain the evidence to support the hypothesis, we question the evidence and ask ourselves whether there is any other possible explanation that could account for our observations. Occasionally, great minds are able to advance our understanding by overturning apparently rational dogma through a new interpretation of the evidence. At a more mundane level, we search for sources of bias or subjectivity that cast doubt on the conclusions.

One human clinical example that should make us think is a very personal story from a distinguished medical practitioner and researcher. Iain Chalmers bought a copy of Dr. Benjamin Spock's famous book Baby and Child Care when he was a recently graduated physician in the mid-1960s. In his copy, he marked a passage that read: “There are two disadvantages to a baby's sleeping on his back. If he vomits, he's more likely to choke on the vomitus. Also he tends to keep his head turned towards the same side… I think it is preferable to accustom a baby to sleeping on his stomach from the start.”

As an obstetrician, Chalmers passed on and acted on this apparently rational and authoritative advice in Spock's book. Three decades later, collation of the results of many simple observational studies contradicted this advice, culminating in a recommendation issued by the American Academy of Pediatrics in 1992 that infants be placed on their backs to reduce the risk of sudden infant death syndrome (SIDS). Prior to this advice, more than 70% of US infants were placed on their fronts to sleep. By the year 2000, about 80% of infants slept on their backs and the incidence of SIDS had decreased by more than 40% (from 1.4 deaths/1,000 births to 0.8 deaths/1,000 births; Figure 1).6 In a letter to the editor of the British Medical Journal, Chalmers wrote, “We now know that the advice promulgated so successfully in Spock's book led to thousands, if not tens of thousands, of avoidable cot deaths.”7

Figure 1—

Sudden infant death syndrome rate in the United States from the National Center for Health Statistics data and prone-positioning rate from National Institute for Child Health and Human Development surveys over the years 1987 to 1998. The American Academy of Pediatrics recommendation against prone sleeping was made in 1992, and the “Back to Sleep” campaign was begun in 1994.6

Citation: Journal of the American Veterinary Medical Association 235, 9; 10.2460/javma.235.9.1035

Although it is highly likely that the vast majority of beliefs and opinions propounded by experts are true, there is no way of determining their truth when they have not been tested. Similarly, when two experts disagree, we cannot rank their opinions without evidence from the results of well-conducted clinical research.

Translating Results from Research on Populations to Individual Patients

There are many criticisms of scientific medicine. Some of them are probably justifiable and should be addressed, whereas scientific medicine should be vigorously defended against others. Among the latter is the criticism that those who practice evidence-based medicine only consider the population and not the individual. Although evidence is collected from populations, it is applied with judgment by the clinician.

Consider the distribution of rectal temperatures in a population of healthy horses (Figure 2). We consider 38.4°C (101.1°F) to be the normal rectal temperature of a healthy horse,8 but we wouldn't expect every healthy horse to have a temperature of exactly 38.4°C. The rectal temperature of most horses will fall in a range of 38.0° to 39.0°C (100.4° to 102.2°F), roughly the mean value ± 2 to 3 SD, thereby representing well over 95% of horses; however, a small number of perfectly healthy horses will have their internal thermostats set to an unusually high temperature. A population of horses with a viral respiratory infection may have a wide distribution of rectal temperatures (Figure 3). The majority will have a temperature above 39.0°C, and we might set this as the threshold above which we consider disease to be present. However, there will be a small number of infected horses whose rectal temperature remains lower than this threshold. These animals may be patently ill, with other signs of disease such as coughing and nasal discharge. We accept that 38.4°C is a useful reference value for healthy horses, but common sense prepares us not to be surprised if we obtain a temperature of 39.0°C, and we do not fail to diagnose a problem in a horse with other signs of infectious respiratory disease just because the patient's temperature falls within the reference range. As such, we take a holistic approach to diagnosis and consider all the evidence.

Figure 2—

Distribution of rectal temperatures of healthy horses based upon a mean ± SD temperature of 38.4 ± 0.22°C (101.1 ± 0.4°F) as reported by Refinetti and Piccione.8 The range indicated by the dashed horizontal line represents approximately 68% of temperatures. The range indicated by the dotted line represents approximately 95% of temperatures.


Figure 3—

Distribution of rectal temperatures of a hypothetical population of horses with a viral respiratory infection. The range indicated by the dotted horizontal line represents the range for 95% of rectal temperatures of healthy horses.


It is not strictly true to say that the rectal temperature of all healthy horses lies in the range of 38.0° to 39.0°C, although when the temperatures of 99 out of 100 horses from a sample of healthy horses fall within this range, this is a reasonable or pragmatic conclusion.
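If we take the published mean and SD8 at face value and assume a normal distribution, the fraction of healthy horses falling inside the 38.0° to 39.0°C range can be checked with a few lines of code (a sketch only; the normal model and the exact SD value are assumptions):

```python
from math import erf, sqrt

def normal_cdf(x, mean, sd):
    """Cumulative probability of a normal distribution at x."""
    return 0.5 * (1.0 + erf((x - mean) / (sd * sqrt(2.0))))

MEAN, SD = 38.4, 0.22  # rectal temperature of healthy horses (degrees C)

# Probability that a healthy horse's temperature falls in 38.0-39.0 C
p_in_range = normal_cdf(39.0, MEAN, SD) - normal_cdf(38.0, MEAN, SD)

# Probability of a "false positive": a healthy horse above the 39.0 C threshold
p_above = 1.0 - normal_cdf(39.0, MEAN, SD)

print(f"P(38.0 <= T <= 39.0) = {p_in_range:.3f}")  # roughly 0.96
print(f"P(T > 39.0)          = {p_above:.4f}")
```

Under these assumptions, roughly 96% of healthy horses fall inside the range; the small remainder is exactly why a fixed cutoff can never be treated as definitive for an individual patient.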

If we return to the SIDS example, we can conclude that the general truth is that babies are less likely to die when they are placed on their backs to sleep. It would be wrong to conclude that there will be no deaths caused by a baby sleeping on its back, vomiting, and choking on the vomit. The parents of such a child may not obtain any solace from the fact that more deaths were prevented in the population as a whole when that population followed their doctors' advice on sleeping positions. Further research may identify additional risk factors that suggest that particular children might be better placed in a prone position to sleep, which would provide a more precise and generalizable hypothesis concerning SIDS.

Why is Study Design so Important?

At some stage in our professional lives, we find ourselves reading a journal article or textbook or possibly listening to a talk, and we learn something that contradicts a previously held belief. In usual circumstances, we evaluate the new information, the reasons we ought to believe it or dismiss it, and move on. The new information may result from new research or may be a useful observation from an experienced colleague; it may accord with our existing beliefs, or it may run contrary to them. In any event, we make a rational decision by assigning a greater or lesser weight to the evidence supporting our old belief or supporting a change in belief. In other words, there is a hierarchy of evidence.

Within clinical research, there is a similar hierarchy of evidence by which the results of studies conducted by use of different methods can be ranked. It should come as no surprise that evidence arising from a randomized controlled trial involving 200 subjects would be considered stronger evidence than a case series involving 3 subjects. This hierarchy of evidence is akin to a pyramid (Figure 4). At the top of the pyramid are systematic reviews, in which various studies are methodically and systematically compared. Systematic reviews may contain a meta-analysis in which the results of many comparable studies are collated into a single statistical superanalysis. Well-conducted randomized controlled trials are considered to provide better evidence than cohort studies and case-control studies.

Figure 4—

Relative strengths of evidence provided by different methods used in clinical research illustrated diagrammatically in the so-called pyramid of evidence. Strength of evidence increases from the base to the peak of the pyramid.


In veterinary research, we can expose experimental subjects to the pathogen or other risk factor of interest in a challenge study. This type of study is not ethically acceptable in human medicine, but in veterinary clinical research, it allows for the control of many potential confounders, although it relies on exposure that may not mimic natural exposure. Evidence from studies involving subjects in a regular clinical setting may provide the closest match for the population of animals under the care of most veterinarians.

At the bottom of the hierarchy, we place evidence from case reports or anecdotal information, which may contain the truth, but the reader is offered very little in the way of scientific evidence to support the results provided.

Why are Statistics so Important?

Although many of us appreciate the difference between belief and scientific knowledge, it is an intrinsic human failing that we are poor at judging risk or uncertainty. Our estimates are influenced by our hopes, fears, and experience. The chance of winning the big prize in the UK national lottery is approximately 1 in 14 million. This is a very small number, and for all intents and purposes, it is zero. One might rationalize the purchase of a single ticket as a way to provide some entertainment when watching the draw on TV; however, the purchase of two tickets is irrational. Although buying a second ticket will double one's chance of winning, the result is such a small number that it remains effectively zero. If you do not purchase a ticket, it is a certainty that you cannot win. Purchasing the first ticket will provide a hope of winning; purchasing a second ticket is the triumph of hope over statistical expectation. Gambling on a roulette table is a similar form of entertainment that is guaranteed to enrich the casino. Although the odds are fair for the numbers 1 to 36, the fact that there is a 37th number (zero) on which the casino always wins guarantees that eventually the odds are stacked against the player. Although there are forms of gambling in which a player's skill can enable him or her to profit, it is nonetheless a fact that in many forms of gambling, the hope of winning distorts our perception of risk.
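The roulette argument can be made concrete with a short calculation (a sketch for a European wheel, where the standard single-number bet pays 35 to 1):

```python
# Expected value of a 1-unit single-number bet on a European roulette wheel.
# There are 37 pockets (0-36); the 35-to-1 payout would be fair for 36
# pockets, so the single zero is what tilts the odds toward the casino.
pockets = 37
payout = 35  # units won per unit staked on a correct number

p_win = 1 / pockets
expected_value = p_win * payout - (1 - p_win) * 1

print(f"Expected value per unit staked: {expected_value:.4f}")  # about -0.027
```

On average the player loses about 2.7 cents of every dollar staked; no pattern of bets can change that expectation.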

The treatment of our patients will often contain an element of risk, but this should be a well-considered and understood risk and not a gamble based on hope. A basic understanding and application of statistics is the only way that we can make objective estimates of risk. Recent veterinary graduates are often influenced by the cases they have seen in university clinics or referral hospitals and have to be reminded that the frequency of diagnoses (the prevalence of disease) that they have witnessed during their training may not be representative of general veterinary practice. A phrase that is often used is “When you hear the sound of hooves, try to think ‘horses’ rather than ‘zebras.’”

Consider the example of a professor who addresses 31 students in a class and offers to bet $10 that at least two of the students in the room share the same birthday (day and month). Given that there are 365 days in the year and only 31 students, this might seem like a rash bet. To people not familiar with this statistical phenomenon, the instinctive answer is that the chance of any two students having the same birthday is certainly < 0.5 (1 in 2). In fact, the actual chance is 0.73 (approx 3 in 4), which means that for every four times the professor makes this bet (with a new class of students, of course), the professor will win three times. The mathematics of this example are available elsewhere.9
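The birthday calculation itself is short: the probability that all n birthdays are distinct is a product of shrinking fractions, and its complement gives the professor's chance of winning (a sketch assuming 365 equally likely birthdays and ignoring leap years):

```python
def p_shared_birthday(n, days=365):
    """Probability that at least two of n people share a birthday,
    assuming birthdays are independent and uniform over `days` days."""
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (days - i) / days
    return 1.0 - p_all_distinct

print(f"31 students: {p_shared_birthday(31):.2f}")  # about 0.73
```

The crossover is surprisingly early: with only 23 people the probability of a shared birthday already exceeds one half.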

It doesn't require a leap of the imagination to consider that a researcher could examine the results of a clinical trial that revealed a large difference between the treated and nontreated animals and think that, with such obvious results, a statistical analysis is unnecessary. Sadly, statistical analysis is always necessary; the greatest danger we face is the temptation to decide that the difference that is observed is so great that it could not possibly have happened by chance (or that the difference is so small that there is no difference). The only way we can make objective judgments about risk and probability is by using statistics.

The complexity of a clinical trial may require skilled statistical consideration. This represents a problem for researchers on the one hand but an even bigger problem to veterinarians on the other. It is unreasonable to expect every practitioner to be able to detect the inappropriate use of statistics, and so we must rely on the quality of peer review. Many journals now use a specialized statistical reviewer or editor to check the statistics included in reports. There is a hackneyed saying in the statistical world that if you torture the numbers long enough, they'll eventually cough up a result. So just because authors report a statistically significant result, it doesn't make it particularly valid; it doesn't even guarantee that it is true. At the minimum level that most journals accept as indicating a significant difference (P < 0.05), it means that if the experiment were repeated 20 times and there were truly no effect to be detected, then on one occasion you might expect to see a significant difference reported just as a result of natural variation or coincidence. In other words, the reported phenomenon may just be a 1 in 20 chance event.
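The meaning of P < 0.05 can be demonstrated by simulation: if we repeatedly compare two groups drawn from the same population (ie, no real effect exists), a test at the 5% level will nonetheless declare a "significant" difference about 1 time in 20. A minimal sketch, assuming normally distributed data with known SD and a two-sided z-test:

```python
import random

random.seed(1)

def one_null_trial(n=50):
    """Compare two groups drawn from the SAME normal distribution
    (i.e., no true treatment effect) with a two-sided z-test."""
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(0.0, 1.0) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    se = (2.0 / n) ** 0.5         # SE of the difference, known SD = 1
    return abs(diff) > 1.96 * se  # "significant" at P < 0.05

trials = 10_000
false_positives = sum(one_null_trial() for _ in range(trials))
print(f"False-positive rate: {false_positives / trials:.3f}")  # near 0.05
```

Every one of those "significant" results is a pure chance finding, which is why a single P < 0.05 is a starting point for judgment, not a guarantee of truth.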

Conclusion

In recent years, it has become apparent that the public perception of science has become tarnished by the social problems associated with rapid technological progress. However, as much as this author believes that good science is the best path to obtaining valid and relevant information to improve clinical decision making in veterinary medicine and surgery, it may not be the only path to truth in all areas of human endeavor. Epistemology is a philosophical discipline concerning the theory of knowledge, particularly with regard to its methods, validity, and scope, and the distinction between justified belief and opinion. Most high school and college science courses introduce the basic concepts that guard against drawing incorrect conclusions from observations and experiments, but these ideas are often forgotten when students progress to their vocational veterinary training because of the enormous volume of new information that they need to learn to qualify as a veterinarian. The reason we should pause and consider epistemology, and more specifically, formal scientific methodology, is that it lies at the heart of evidence-based veterinary medicine. Whether we advocate this approach or are skeptical about its value, it helps to understand how we personally arrive at truth and whether scientific method has a role to play.

Many veterinarians would probably admit to being empiricists and positivists (as far as veterinary medicine is concerned), even though these may not be labels we use every day. An empiricist in this context refers to the belief that knowledge should be based on observations or sensory perception (generally, we count or measure the phenomenon of interest). We consciously or unconsciously take note when animals respond to treatments, confirming that our diagnosis and treatment choices were correct. We look for similar evidence when new treatments or diagnostic methods are suggested to us. Positivists tend to believe that all truth can be arrived at through adhering to scientific methods. Although it may be largely accurate to say that positivism holds for much of the truth for which we search, it would be wrong to be dogmatic (indeed, it would be unscientific to close our minds to alternative philosophical approaches). Having said that, is there a better alternative to the use of a well-designed randomized controlled trial to determine whether one of two treatments for a particular disease is better? There are many deep sociological, ideological, and philosophical questions for which the randomized controlled trial is not the answer, but for the time being, simple quantitative empirical scientific research is, if not the best approach for finding the truths in veterinary practice, certainly the least worst.

A final point to make is that when clinicians use a scientific approach to their clinical decision making, it provides the clearest indication of where the research deficits are. Inevitably, academics concentrate on research work that interests them, is able to attract funding, and is likely to be published. As a result, the research undertaken in universities and research institutes may not reflect the needs of veterinary practice. Having said that, those of us who undertake research are always looking for answerable questions to provide projects for students and residents and are often looking to forge links with practitioners who often have much better access to the population of animals in which we are interested. Practitioners searching for good-quality scientific evidence and failing to find it can contact colleagues in universities directly or indirectly through letters to journals or postings on veterinary information Web sites. They might also bring these information needs to the attention of grant-awarding bodies whose mission is to address the need of improving veterinary practice. An introductory textbook on the practice of evidence-based veterinary medicine was published in 2003.10

This issue of the JAVMA includes the first report in a new feature that was designed to provide examples of evidence-based decision making in veterinary practice. Most of us find that we learn best by practicing the skill we are trying to acquire. Similarly, it is often best to illustrate how decision making should be performed by providing examples that represent likely clinical scenarios. It is hoped that, as the feature develops, it will include reports covering a wide range of veterinary disciplines and health-care situations to show how evidence-based veterinary medicine can be practiced.

References

1. Andersson G, Ekman L, Månsson I, et al. Lethal complications following administration of oxytetracycline in the horse. Nord Vet Med 1971;23:922.

2. Cook W. Diarrhoea in the horse associated with stress and tetracycline therapy. Vet Rec 1973;93:1517.

3. MacKellar JC, Vaughan SM, Smith RJ, et al. Diarrhoea in horses following tetracycline therapy (lett). Vet Rec 1973;93:593.

4. Jansen ML. Oxytetracycline by injection for horses (lett). N Z Vet J 1988;36:101–102.

5. Whitlock RH. Colitis: differential diagnosis and treatment. Equine Vet J 1986;18:278–283.

6. American Academy of Pediatrics Task Force on Infant Positioning and SIDS. Positioning and SIDS: changing concepts of sudden infant death syndrome: implications for infant sleeping environment and sleep position. Pediatrics 1992;89:1120–1126.

7. Chalmers I. Invalid health information is potentially lethal (lett). BMJ 2001;322:998.

8. Refinetti R, Piccione G. Intra- and inter-individual variability in the circadian rhythm of body temperatures of rats, squirrels, dogs, and horses. J Therm Biol 2005;30:139–146.

9. Holmes S. Birthday problem. Available at: www-stat.stanford.edu/∼susan/courses/s116/node50.html. Accessed Jul 6, 2009.

10. Cockcroft PD, Holmes MA. Handbook of evidence-based veterinary medicine. Oxford, England: Blackwell Publishing, 2003.
