The volume of veterinary information is rapidly expanding, and with each passing day, the knowledge gained during one's veterinary school education becomes increasingly outdated. Veterinary practitioners face challenges when trying to build on existing clinical knowledge with additional information from the scientific literature to make better clinical decisions. Although veterinary medicine is built on scientific principles, incorporation of information from the scientific literature into clinical decision making is challenging for both new graduates and experienced practitioners. One of the largest obstacles is the amount of time required to accurately assess scientific information; however, a systematic approach to literature review enhances the usefulness of information gleaned from the literature and allows efficient use of one's time.1,2
At some point in their careers, veterinarians may question the emphasis on the use of scientific literature rather than clinical observations and experience to support clinical decision making. They may also consider that reading scientific literature and reviewing new information takes a considerable amount of time, and they may question whether the time investment is worth any benefit that might come with information gained beyond their clinical experience.
Clinical observations play an important role in veterinary decision making, and practitioners should not discount the value of clinical observations as a source of knowledge. In fact, clinical experiences form the framework through which all literature is interpreted. Although knowledge gained through experience can be useful, when such knowledge is used as the sole source of information, practitioners can fail to account for potential biases in their perception and interpretation of events, biological variation among animals, and the complex nature of the biological systems with which they deal on a daily basis. Reliance on clinical observations and experience also slows the adoption of evolving information and advances in diagnostic testing, therapeutic interventions, and medical technologies. When new ideas are introduced to the medical community, scientific literature serves as a medium through which practitioners can learn of those ideas without having to discover them independently. Clear, concise reports of well-designed studies speed the process of information acquisition and dissemination, and practitioners can use their clinical experience to incorporate this information in the optimum manner to serve their clients.
Limitations of Biological Observations
Causal associations can be accurately identified when an initiating cause and outcome are easily detected, there are no interactions involving multiple factors, and the cause and outcome are closely spaced in time. However, when a causative factor cannot be identified by the senses alone, when multiple causes interact to bring about an outcome, or when causes are separated from outcomes by long periods, erroneous conclusions can be made about causation. Most cause-and-effect relationships in veterinary medicine are of this nature. For example, for an animal with diarrhea of bacterial or viral origin, the causative organism is not grossly visible and practitioners typically lack the technical expertise or equipment needed to detect the cell damage that it caused. Whereas diagnostic tests can be used to identify specific pathogens, the lag period between pathogen exposure (cause) and the development of clinical signs (effect) could impact interpretation in situations in which a virus causes damage that allows bacterial overgrowth, which subsequently leads to clinical signs. Effects of pathogens can also be influenced by host characteristics such as age, sex, immune status, or genetics, and these characteristics can interact to further influence the likelihood and severity of disease. In complex situations such as these, attempts to identify the effectiveness of individual prevention or treatment strategies on the basis of clinical observations and experience alone would likely lead to erroneous conclusions. A combination of clinical experience and information from controlled studies on a specific topic is more likely to lead to the identification of optimum treatment and control strategies than is either of these elements alone.
Advancements in disease prevention and treatment were slow or nonexistent before specific strategies were developed to control bias (systematic error) and rigorously test hypotheses through the use of statistical tests.3–5 From the 1st century through the end of the 19th century, the dominant understanding of disease causation in the Middle East, China, India, and Europe was miasma theory—the theory that disease was caused by fetid air.6–8 This primitive understanding existed despite rapid developments in other sciences such as engineering, astronomy, and navigation, which began from the time historical records were first kept and continued through the industrial revolution.9,10
The reasons that advancements in the medical sciences lagged behind those in other sciences for hundreds of years are the same reasons that veterinarians must embrace the scientific method and use the information from rigorous, well-designed studies that control for factors that might bias the results. The lag was not attributable to apathy, inadequate education, or poor communication among thought leaders; otherwise, progress in the other sciences would have been slow as well. Rather, the lag can be explained by the fundamental characteristics of biological science, which differ from those of physics, chemistry, and other sciences. Interpretation of the effects of interventions on a biological system is impeded by intra- and interindividual variation, bias, and complexity. When a mechanical object or system breaks or fails to perform as expected, the cause and effect can be accurately identified. In that situation, human intervention to correct the problem is the only change exerted on the system, and simple studies involving small numbers of subjects would yield important practical experience and information. However, when a biological system (eg, an animal, a microbe, or a population) fails to perform as expected and a particular intervention is provided, many unknown and unseen factors also become involved. Therefore, any change in that biological system is not necessarily attributable to that particular intervention. For example, when an intervention is applied to a group of animals near the end of an outbreak of a contagious disease (by which point the number of susceptible animals has become depleted), the outbreak will end regardless of any intervention; thus, that intervention will appear to have been successful.
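The outbreak scenario can be sketched with a toy simulation (a deterministic susceptible-infected model with invented parameters, not data from any real outbreak): new cases decline late in an outbreak simply because susceptible animals are depleted, so an intervention applied after the peak appears effective regardless of whether it does anything.

```python
def run_outbreak(n=1000.0, beta=1.5, days=30):
    """Deterministic susceptible-infected toy model: each day, new cases
    are proportional to contacts between infected animals and the
    susceptibles that remain."""
    susceptible, infected = n - 1.0, 1.0
    daily_new = []
    for _ in range(days):
        new = min(susceptible, beta * infected * susceptible / n)
        susceptible -= new
        infected += new
        daily_new.append(new)
    return daily_new

cases = run_outbreak()
# New cases fall to zero in the final days with no intervention at all,
# simply because no susceptible animals remain; a treatment given to the
# herd after the peak would be followed by this same decline and could
# easily be credited for it.
```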
Biological systems involve complex homeostatic mechanisms; remarkable abilities to adapt to deficiencies in various factors, remove insults (eg, infectious agents, toxicants, or damaged tissue), and repair damage; and multifaceted interactions among host, insult, and environmental factors to mediate the extent of protection or damage. All of these make biological systems infinitely more complex than even the most complicated machines.
Differences in complexity between biological and mechanical systems are important when considering the continuous exposure people have to mechanical systems and the ways in which that exposure influences our thought processes. Although modern automobiles are complicated mechanical systems, one could gather the knowledge of a group of experts to learn everything there is to know about the workings of each component and the manner in which those components interact. In contrast, one could gather the knowledge of everyone in the world and still not fully understand the workings of an animal. This difference in degree of understanding is important when thinking about protecting, diagnosing, and repairing an animal versus an automobile. Careful observation, system knowledge, and previous experience lead to accurate assessments of cause and effect in mechanical systems, and because people (including veterinarians) are accustomed to making such causal inferences, it is important to recognize that these same skills and attributes often fail when attempting to make causal inferences for the much more complex biological and medical sciences.
Acknowledging and Controlling for Bias
Veterinarians observe daily the variation within and among animals and biological systems. Awareness of the range of behaviors, physiologic indices, and growth and reproductive performance indicators for healthy and compromised animals is important for problem solving. Animals treated identically can differ in their responses, whether the treatment affects growth, pain tolerance, response to anesthesia, immunologic response to vaccination, or other outcomes. Differentiating the expected variation in responses from that attributable to the effects of an intervention requires careful data collection and probability calculations (ie, statistical tests).
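As a concrete sketch of such a probability calculation, the following exact permutation test (using invented weight-gain values, not real data) asks how often a difference in group means at least as large as the one observed would arise if treatment truly had no effect:

```python
import itertools
import statistics

# Hypothetical average daily weight gains (kg) for two groups of calves
treated = [1.10, 0.95, 1.20, 1.05, 1.15]
control = [0.90, 1.00, 0.85, 1.05, 0.95]

observed = statistics.mean(treated) - statistics.mean(control)

# Exact permutation test: if treatment had no effect, every way of
# splitting the 10 values into two groups of 5 would be equally likely,
# so we count how often a mean difference at least as large as the
# observed one arises from the split alone.
values = treated + control
n = len(treated)
count = total = 0
for idx in itertools.combinations(range(len(values)), n):
    group_a = [values[i] for i in idx]
    group_b = [values[i] for i in range(len(values)) if i not in idx]
    total += 1
    if statistics.mean(group_a) - statistics.mean(group_b) >= observed:
        count += 1

p_value = count / total  # probability the difference is chance variation
```

A small p value suggests the observed difference exceeds what biological variation alone would be expected to produce; a large one suggests the "treatment effect" may be nothing more than expected variation.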
Well-designed scientific studies also differ from clinical observation and experience because, by design, explicit constraints are implemented to control for sources of bias. The word bias is commonly used to describe prejudice or imbalance in perception that predisposes a person to assess a situation in a particular light, and this use is similar to the way in which the word is used when describing scientific studies. Clinical experience is particularly prone to bias because the same person provides and then evaluates interventions. This does not imply that a clinician is intentionally influencing results. Rather, when bias is not specifically controlled for, conclusions can be influenced by factors other than the particular intervention. Common, often inadvertent biases associated with clinical observations are typically grouped into the categories of selection bias, information bias, and confounding.
Selection bias exists when animals differ among study groups in more ways than just the intervention or putative risk factor assessed. In clinical practice, veterinarians routinely use information about a patient's signalment, history, comorbid conditions, and other variables to develop diagnostic and treatment plans. Although clinically reasonable, this approach to decision making introduces selection bias, which prevents accurate comparisons among interventions or other factors of interest. For example, selection bias may occur when a veterinarian is more likely to administer the perceived best treatment to patients with the most severe clinical signs, which could result in an underestimation of the response to treatment because those patients would presumably be more difficult to treat than patients with less severe signs or in overestimation because those patients might show the greatest clinical response. Recognition of and controlling for selection bias when making clinical observations or decisions is challenging; however, this type of bias can be controlled for in scientific studies to improve the validity of comparisons.
Information bias is also common, given that veterinarians rarely have full and equal information about the animals for which they provide care. This type of bias can occur when certain animals or groups of animals are observed more closely, with different observation or monitoring methods, or for a longer period than other animals or groups of animals because of characteristics such as breed, age, housing conditions, severity of clinical signs, or convenience. In other words, information bias exists when the degree of scrutiny differs among animals or groups. In clinical practice, such bias can easily occur when comparing the effects of a new intervention with those of a more traditional intervention because animals receiving the new intervention may be more closely observed than those receiving the more traditional (and thus previously evaluated) intervention. This difference in scrutiny can contribute to an incorrect understanding of relationships between interventions and outcomes.
Confounding can occur when 2 factors are associated with each other but not evenly distributed among the subjects evaluated, making it difficult to identify which factor is truly associated with the outcome of interest. Because of the aforementioned complexity of biological systems, confounding is a common problem when clinical observations are used to make assumptions about disease causation or treatment effectiveness. For example, consider body condition and housing type in cats. These 2 characteristics are confounded in that the proportion of overweight cats differs among housing types, with overweight cats overrepresented among apartment dwellers.11 Therefore, any study designed to evaluate the effect of housing type or obesity on health outcomes for cats must include a means of accounting for the potential confounding relationship between these factors; otherwise, it would be impossible to determine which factor was truly associated with an outcome.
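The cat example can be expressed numerically. The counts below are invented for illustration; they are deliberately constructed so that body condition alone drives disease risk, yet a naive comparison by housing type suggests apartment living is harmful:

```python
# Invented counts: (diseased, total) for each housing type and body
# condition. Within each body-condition stratum the risk is identical
# across housing types; only the mix of cats differs.
counts = {
    ("apartment", "overweight"): (12, 60),
    ("apartment", "lean"):       (2, 40),
    ("house", "overweight"):     (4, 20),
    ("house", "lean"):           (4, 80),
}

def crude_risk(housing):
    """Risk of disease by housing type, ignoring body condition."""
    diseased = sum(d for (h, _), (d, _t) in counts.items() if h == housing)
    total = sum(t for (h, _), (_d, t) in counts.items() if h == housing)
    return diseased / total

def stratum_risk(housing, weight):
    """Risk of disease within a single body-condition stratum."""
    d, t = counts[(housing, weight)]
    return d / t

# The crude comparison suggests apartment cats are at markedly higher
# risk, but within each stratum the two housing types carry identical
# risk: the apparent effect comes entirely from the confounder.
crude_ratio = crude_risk("apartment") / crude_risk("house")
```

Stratifying (or statistically adjusting) by the confounder is what reveals that the crude association is spurious, which is precisely what uncontrolled clinical observation cannot do.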
Well-designed scientific studies involve use of several techniques to control for bias, including random selection of subjects that meet the study inclusion criteria, random allocation of subjects to treatment groups, and blinding of outcome assessors with regard to the treatment groups to which subjects were assigned. However, even randomized controlled trials retain some degree of bias, and that bias can be substantial when techniques to control for it are not rigorously applied. For example, a study12 in which the methodological quality of 250 controlled trials from 33 meta-analyses was evaluated revealed that studies in which the method of blinding was not clearly indicated or was used inadequately yielded findings that exaggerated the effectiveness of interventions by a mean of 30% to 40%. Another study13 was conducted to evaluate all reports of studies represented as randomized controlled trials of therapeutic interventions that were published in a single human medical journal over a 2-year period. That study revealed an even greater impact of failure to effectively blind outcome assessors, with treatment effectiveness exaggerated by approximately 70% when assessors were not (vs were) blinded. Similar examples exist in veterinary medicine. A systematic review14 of trials designed to evaluate the efficacy of vaccination for the prevention of contagious conjunctivitis (pinkeye) in cattle revealed that trials for which blinding of investigators and random allocation of cattle to treatment groups were not reported were more likely to yield conclusions that vaccination was efficacious, compared with conclusions of trials for which these bias-control techniques were reported. 
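Two of these techniques, random allocation and masking of group identity from outcome assessors, can be sketched as follows (the function, seed, and group codes are illustrative assumptions, not taken from the cited studies):

```python
import random

def blinded_allocation(animal_ids, seed=1):
    """Randomly allocate animals to two equal-sized groups and mask the
    group identity so outcome assessors see only a code ('A' or 'B'),
    not which code denotes treatment."""
    rng = random.Random(seed)  # fixed seed so the allocation is reproducible
    ids = list(animal_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    assignments = {aid: ("treatment" if i < half else "control")
                   for i, aid in enumerate(ids)}
    # Randomly map group names to codes; in practice the key would be
    # held by a third party until outcome assessment is complete.
    code = dict(zip(("treatment", "control"), rng.sample(["A", "B"], 2)))
    masked = {aid: code[grp] for aid, grp in assignments.items()}
    return assignments, masked

groups, labels = blinded_allocation(range(1, 21))
```

Assessors record outcomes against the masked labels only, so the optimism that inflated treatment effects in the cited trials has no route into the data.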
The type of bias associated with subjectively measured outcomes, such as those in the aforementioned examples (eg, treatment success or failure), is referred to as optimism error or wish bias, by which unblinded assessors identify fewer treatment failures than do blinded assessors, particularly during assessment of treatment groups versus control groups.15 This does not mean that the researchers were intentionally deceitful, but by human nature, unblinded outcome assessors may consciously or subconsciously have perceived a difference among treatments when none actually existed because they believed their study hypothesis was true, which resulted in misclassification of outcomes.
Another drawback of clinical observation and experience that is addressed in well-designed scientific studies is a limited ability to detect and quantify multiple relationships between ≥ 2 factors that affect clinically important outcomes. Accurate assessments are more likely to be made in veterinary practice when a single intervention has a direct effect on an important outcome and no other factors influence that effect. However, in addition to the influences of bias and biological variation on validity of clinical observations, health outcomes are rarely influenced by a single factor. Factors that may interact with each other to affect disease onset or recovery include, among others, age; sex; breed; genetics; concurrent exposure to other infectious, toxic, or metabolic insults; nutritional status; and physiologic stress. The interpretative challenge is that the effect of each factor may not be constant and interrelationships among factors may alter the likelihood of disease. For example, sex and body weight are risk factors for death that interact for calves arriving at a feedlot.16 Male calves with a lower body weight at the time of feedlot arrival have a higher risk of death because of an increased risk of developing respiratory disease, compared with the risk for male calves with higher body weight, whereas female calves with a higher body weight have a higher risk of death than do lower-weight females because of an increased risk of reproductive problems. In that scenario, both sex and body weight influence the risk of death, yet the effect of one factor on the outcome is influenced by the other factor. To make interpretation even more difficult, these and other interactions usually involve many variables, making it essentially impossible to accurately identify the precise variables and interrelationships that contribute to disease onset or recovery and to extrapolate clinical observations from one setting to another.
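The calf example can be laid out as a small table of hypothetical risks (the numbers are invented to match the direction of the interaction described in the text, not taken from the cited study):

```python
# Invented mortality risks for feedlot calves by sex and arrival body
# weight, chosen only to illustrate a qualitative interaction.
risk = {
    ("male", "light"):   0.040,  # elevated: respiratory disease
    ("male", "heavy"):   0.015,
    ("female", "light"): 0.010,
    ("female", "heavy"): 0.025,  # elevated: reproductive problems
}

# Effect of higher body weight on risk of death, within each sex
weight_effect_males = risk[("male", "heavy")] - risk[("male", "light")]
weight_effect_females = risk[("female", "heavy")] - risk[("female", "light")]

# Averaging across sexes hides the interaction: the pooled "effect" of
# weight looks small even though it is substantial, in opposite
# directions, within each sex.
pooled_weight_effect = (weight_effect_males + weight_effect_females) / 2
```

A study (or a clinician) that examines the effect of body weight without stratifying by sex would conclude weight barely matters, when in fact it matters considerably for every calf.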
Common Types of Studies Reported in the Veterinary Literature
Although many factors that affect health and disease of animals are difficult or impossible to detect, interventions and clinically important outcomes (eg, recovery from disease, longevity, or growth rate) are readily measurable, and those outcomes typically occur within a short enough period after the intervention to preclude the influence of other factors that might occur or develop between intervention and outcome. Accurate, unbiased collection of data followed by unbiased analysis of those data is a foundation of the scientific method and provides the opportunity to make more accurate inferences about veterinary interventions than would be achieved through clinical observation and experience.
The scientific method of problem investigation begins before data collection, with assertion of a clear hypothesis that can be tested (supported or rejected) on the basis of the data to be collected (Figure 1). A well-stated hypothesis will predict a measurable outcome and will be stated in a manner to ensure that a result that supported the stated hypothesis would not also support alternate hypotheses. This contrasts with reliance on clinical observations, for which a presumption of cause and effect is made only after the outcome has been observed. Development of a clear hypothesis prior to data collection is necessary to protect against bias that could occur when data are collected or hypotheses are generated after the outcome is known.
Expertise in study design and statistical analysis is not needed for veterinary practitioners to evaluate the quality of information provided by scientific studies, but they do need to understand the strengths and limitations of various study designs. Similarly, veterinarians do not need to know exactly how radiographic, ultrasonographic, or other diagnostic equipment works, but they should know how to correctly interpret the output of that equipment. The types of studies reported in the veterinary literature can be generally classified as systematic reviews, meta-analyses, randomized controlled trials (including clinical or field trials and laboratory-based trials), cohort studies, cross-sectional studies, case-control studies, case series, and case reports (Table 1). Systematic reviews involve a rigorous and clearly defined process to provide summaries of previous studies on a specific topic and typically provide a broader conclusion than a single trial can. Randomized controlled trials are an example of experimental studies in which investigators assign subjects in an unbiased manner to receive the intervention or exposure being investigated or a control intervention (usually a placebo, sham treatment, or existing treatment). In contrast, in observational studies (cohort, case-control, and cross-sectional designs), investigators draw inferences about the effects of an intervention or putative risk factor in a clinically relevant, natural setting through observation of the subjects, without assigning them to receive particular interventions or undergo certain exposures. In general, experimental designs such as randomized controlled trials provide the greatest control of bias and confounding but, in doing so, yield results that may not be directly applicable to natural clinical situations.
In contrast, observational studies, in general, yield results that are clinically applicable but lack rigorous control of factors that could bias or confound the results.
Systematic reviews and meta-analyses—Well-designed systematic reviews can provide the strongest evidence about clinically important questions because they synthesize the results of multiple studies. Such reviews are designed to answer a focused question, which leads to a systematic search of the literature with a critical review of the identified scientific studies in a transparent process that can be evaluated and repeated by others. The key characteristics of a systematic review include a clearly stated set of objectives; an explicit, reproducible method for the literature search, including specific eligibility criteria for that literature; a search method designed to identify all reports of studies that meet the eligibility criteria; an assessment of the validity of the findings of included studies (eg, thorough assessment of risk of bias); and systematic synthesis and reporting of the characteristics and findings of the included studies.17 Meta-analysis, another method of research synthesis, is designed to provide a statistical summary of the combined results of similar studies.
Table 1—Common types of study designs used to provide information to support clinical decision making in veterinary practice, in descending order of evidentiary strength.

| Study type | Strengths | Limitations |
| --- | --- | --- |
| Systematic review or meta-analysis | Involves use of the full body of literature to estimate the direction and magnitude of effect of potential risk factors, protective factors, and interventions | Restricted to combining studies with a similar design and duration of follow-up (meta-analysis) |
| | Provides analytical methods to estimate magnitude of effect and degree of certainty | Findings subject to publication bias |
| Randomized controlled trial | Allows good control for bias and confounding | Involves restrictive study population and environment that may limit generalizability of results to real-world settings |
| | Provides the strongest evidence for cause-and-effect relationships | Allows testing of only a few variables |
| | | Often involves brief follow-up period between interventions and outcome assessment |
| | | Involves experimentally induced (not naturally occurring) disease in some situations |
| Retrospective or prospective cohort study | Involves use of real-world population and environment | Can be prone to selection bias, information bias, and confounding |
| | Able to test many variables | Can be expensive to conduct prospectively because of the long follow-up period between factors or exposures and outcomes |
| | Able to test for uncommon risk factors | Subjects can be lost to follow-up when conducted prospectively |
| | Allows prolonged follow-up period between factors or exposures and outcomes | |
| Case-control study | Involves use of subjects from real-world populations and environments | Highly prone to selection bias, information bias, and confounding |
| | Can be used to investigate risk factors for uncommon diseases while involving few animals | Does not provide evidence for causation because relative timing of the occurrence of risk factors and outcomes is unclear |
| | Less expensive to conduct than randomized controlled trials and prospective cohort studies | |
| | Can generate questions that can be investigated by use of study types with better control for bias and confounding | |
| Cross-sectional study | Involves use of subjects from real-world populations and environments | Can be prone to selection bias, information bias, and confounding |
| | Typically inexpensive to conduct | Does not provide evidence for causation because relative timing of risk factors and outcomes is unclear |
| | Results can generate questions that can be investigated through use of study types with better control for bias and confounding | |
| Case report or case series | Simple and inexpensive to conduct | Provides only a descriptive account of a clinical experience |
| | Results can generate questions that can be investigated by use of study types with better control for bias and confounding | Does not control for biological variation, bias, confounding, or interactions among factors |
Although systematic reviews and meta-analyses are becoming more common in veterinary medicine than they were in the past, they have limitations.18 Investigators who use these approaches must determine the types of studies that are appropriate to include, while considering the inherent strengths and weaknesses of the various study designs.19 In addition, investigators must consider that the amount of literature available on a given topic can be influenced by publication bias, whereby only some studies are reported (typically those in which an effect is identified for an intervention) and studies in which there were no effects or unexpected effects of interventions are not reported. The effect of publication bias is generally to overestimate the magnitude of effect of risk factors, protective factors, and interventions.17,20 The compelling strength of a well-designed and executed systematic review is that the synthesis of results from numerous studies yields a better estimate of the true magnitude of benefit or harm of a risk factor than that reported in any single article. In addition, when a systematic review results in identification of studies with inconsistent results regarding the direction (eg, greater vs lower risk) and magnitude of an association, a careful evaluation of the specific studies may provide valuable hypotheses for potential interactions that need to be evaluated.
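Publication bias can be illustrated with a wholly synthetic simulation: 200 hypothetical studies of an intervention whose true effect is zero, of which only those reporting a large positive estimate reach publication.

```python
import random
import statistics

rng = random.Random(42)  # fixed seed for reproducibility

# Each hypothetical study estimates the effect of an intervention whose
# true effect is zero; sampling error scatters the estimates around 0.
estimates = [rng.gauss(0.0, 1.0) for _ in range(200)]

# Suppose only studies reporting a large positive estimate are published.
threshold = 1.0
published = [e for e in estimates if e > threshold]

# A reader of the published literature alone sees a strongly positive
# average effect, even though the true effect is zero.
published_mean = statistics.mean(published)
all_mean = statistics.mean(estimates)
```

This is why a systematic review that searches for unpublished and negative studies (or at least formally assesses the risk of publication bias) produces a more trustworthy summary than a casual reading of whatever happens to be in print.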
Randomized controlled trials—Well-designed randomized controlled trials provide rigorous control for many types of bias that might arise when testing the effect of an intervention. This type of study design provides strong evidence that an intervention is involved in the causation of or recovery from disease because the intervention can be shown to precede the outcome. Randomized controlled trials are commonly used to evaluate the efficacy or effectiveness of a new vaccine or treatment for preventing or treating specific diseases. Because such studies are designed to evaluate a single factor, steps are taken to carefully define the population of study subjects eligible for enrollment and ensure the results are applicable to similar populations over similar time frames. Thus, randomized controlled trials are limited for assessment of prognosis or economic outcomes, particularly for settings that differ from the population, time frame, and environment used in that study.
Cohort studies—Of the observational study designs, well-designed cohort studies provide the strongest evidence for use in clinical decision making. Cohort studies are designed to evaluate the effect of a risk factor or intervention on an outcome of interest. Prospective cohort studies start with the collection of data from groups of subjects that lack disease but differ in some characteristic of interest (ie, an exposure) at the time they enter the study. Then, data regarding the outcomes of interest are collected over time. In contrast, retrospective cohort studies typically involve use of a database or medical records to provide information on exposures and, many times, outcomes that was collected before the study was actually conducted. Retrospective studies are efficient when a hypothesized risk factor or exposure is uncommon, and they can be completed in a short period at lower expense than prospective studies, but they lack assurance that data for all study subjects were collected in exactly the same manner. The cohort design allows better control for bias than would be possible with clinical observations alone, but selection and information bias or confounding cannot be controlled as well in cohort studies as in randomized controlled trials. To their advantage, cohort studies are typically performed over longer periods and with populations and environments that are more representative of those encountered in clinical practice than would be possible in randomized controlled trials. Cohort studies are also appropriate for evaluation of long-term prognosis and can be particularly informative for investigation of associations between uncommon factors and outcomes.
Case-control studies—Whereas cohort studies involve selection of subjects on the basis of their exposure to a certain factor, case-control studies involve selection on the basis of outcome or disease status. For case-control studies, subjects with and without the outcome of interest are identified and their history of exposure to specific factors is compared between the 2 groups. These studies can be conducted in a retrospective manner (eg, by use of medical records to identify animals with and without the outcome) or on a prospective basis (eg, enrollment of cases and controls as animals develop the outcome, with subsequent collection of historical information for each group). Compared with other study types, case-control studies can be performed more rapidly and less expensively. The design provides some protection against bias, but case-control studies are more prone to selection and information bias or confounding than are randomized controlled trials, cohort studies, and even cross-sectional studies. Case-control studies can be the most appropriate study design for investigations involving uncommon outcomes.
Cross-sectional studies—Cross-sectional studies typically require less time and fewer resources to complete than do other study types because they involve concurrent collection of data on putative risk factors or exposures and outcomes of interest. Associations identified between exposures and outcomes can provide an estimate of the production or economic impacts of diseases or interventions. Although clinicians cannot make conclusions about cause and effect with this design because the relative timing of exposures and outcomes is often unclear (one cannot determine which came first), cross-sectional studies yield information about associations that may be investigated further through the use of more rigorous study designs.
Case reports and case series—Case reports and case series are common in the veterinary literature and are essentially a description of a clinical experience or series of similar clinical experiences involving ≥ 1 animal. These types of studies are not controlled in that they do not involve comparison of characteristics or intervention outcomes of case animals with those of control animals; only case animals are included in case reports and case series, and the findings are purely descriptive. In addition, these studies have the same limitations as undocumented clinical experiences—namely, lack of control for bias and confounding, limited generalizability of findings, and no valid means by which to identify factors that may interact to influence observed outcomes. Findings of case reports and case series serve to generate hypotheses that can be tested in studies designed to test causal inferences through the use of techniques to control for bias and to extrapolate findings to other clinically relevant populations.
Limitations of Well-Designed Studies
Any factor that has a large impact (magnitude of effect) on disease risk, prevention, or recovery in animals can be detected easily; studies of such factors may yield the correct conclusion even when they fail to account for bias, intra- and interindividual variation, or interactions among variables. However, most such factors were identified long ago, leaving veterinary researchers to investigate factors that have small magnitudes of effect and that are difficult to accurately identify given the complexity and constraints of biological observations.
Despite the advantages of scientific studies over clinical observation and experience alone in providing information for decision making, the scientific method has limitations. First, studies must be well designed to account for variation and control for bias or they lose many of their advantages. Therefore, ranking of scientific findings involves more than just consideration of study type, and study validity should be evaluated prior to drawing inferences from published results. In addition, studies can only be designed to effectively investigate a few simple 2-way interactions (eg, between sex and age). More complicated interactions involving many variables (such as an interaction among sex, age, and reproductive status) undoubtedly exist in biological systems but can be difficult to accurately evaluate with a single controlled study. Studies with the greatest control of bias and confounding (randomized controlled trials) are often conducted in highly artificial environments; when they are conducted in clinically relevant environments, variation of the animals and environments used is purposefully restricted, thereby limiting the scope of clinical situations to which the findings can be directly applied. On the other hand, observational studies usually involve selection of subjects from populations comparable with at least some practice settings, but at the cost of strict control for bias and confounding. Regardless of the approach used to test a hypothesis or clinical question, research efforts are constrained by cost, space, and time. These constraints lead inexorably to limitations in study results, regardless of the design used, and therefore to a need for veterinary practitioners to use critical thinking and careful assessment of the reliability of reported results.21
Clinical Summary
Use of clinical observation and experience as the sole source of information for clinical decision making resulted in a profound lack of advancement in prevention and treatment of diseases for many centuries. However, development of specific strategies to separate treatment effects from random biological variation, reduce or eliminate the effects of bias, account for confounding factors, and identify and quantify interactions among factors associated with outcomes has led to unprecedented advancements in medical knowledge since the first half of the 20th century. To avoid the same ineffective means for problem solving that plagued medical professionals throughout history, practicing veterinarians must value and incorporate into their decision-making processes the results of rigorous research studies reported in the scientific literature.
References
1. Bahtsevani C, Uden G, Willman A. Outcomes of evidence-based clinical practice guidelines: a systematic review. Int J Technol Assess Health Care 2004; 20: 427–433.
2. Bright TJ, Wong A, Dhurjati R, et al. Effect of clinical decision-support systems: a systematic review. Ann Intern Med 2012; 157: 29–43.
3. Fisher RA. Statistical methods for research workers. Edinburgh, Scotland: Oliver and Boyd, 1925.
4. Fisher RA. The design of experiments. Edinburgh, Scotland: Oliver and Boyd, 1935.
5. Mann HB. Analysis and design of experiments. New York: Dover Publications, 1949.
6. Last JM. Miasma theory. In: Breslow L, ed. Encyclopedia of public health. Vol. 3. New York: Macmillan Reference, 2001; 765.
7. Zhu JP. Exploration of the relationship between geographical environmental and human diseases in ancient China. J Tradit Chin Med 2011; 31: 382–385.
8. Karamanou M, Panayiotakopoulos G, Tsoucalas G, et al. From miasmas to germs: a historical approach to theories of infectious disease transmission. Infez Med 2012; 20: 58–62.
9. Dahnke MD, Dreher HM. The scientific revolution. In: Philosophy of science for nursing practice: concepts and application. New York: Springer Publishing Co, 2011; 87–94.
10. Marks in the evolution of western thinking about nature. Available at: www.sciencetimeline.net. Accessed Apr 10, 2015.
11. Scarlett JM, Donoghue S, Saidla J, et al. Overweight cats: prevalence and risk factors. Int J Obes Relat Metab Disord 1994; 18(suppl 1): S22–S28.
12. Schulz KF, Chalmers I, Hayes RJ, et al. Empirical evidence of bias: dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA 1995; 273: 408–412.
13. Poolman RW, Struijs PA, Krips R, et al. Reporting outcomes in orthopaedic randomized trials: does blinding of outcome assessors matter? J Bone Joint Surg Am 2007; 89: 550–558.
14. Burns MJ, O'Connor AM. Assessment of methodologic quality and sources of variation in the magnitude of vaccine efficacy: a systematic review of studies from 1960 to 2005 reporting immunization with Moraxella bovis vaccines in young cattle. Vaccine 2008; 26: 144–152.
15. Hróbjartsson A, Thomsen AS, Emanuelsson F, et al. Observer bias in randomised clinical trials with binary outcomes: systematic review of trials with both blinded and non-blinded outcome assessors. BMJ 2012; 344: e1119.
16. Babcock AH, Cernicchiaro N, White BJ, et al. A multivariable assessment quantifying effects of cohort-level factors associated with combined mortality and culling risk in cohorts of US commercial feedlot cattle. Prev Vet Med 2013; 108: 38–46.
17. Liberati A, Altman DG, Tetzlaff J, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. J Clin Epidemiol 2009; 62: e1–e34.
18. Sargeant JM, Torrence ME, Rajic A, et al. Methodological quality assessment of review articles evaluating interventions to improve microbial food safety. Foodborne Pathog Dis 2006; 3: 447–456.
19. Hatala R, Keitz S, Wyer P, et al. Tips for learners of evidence-based medicine: 4. Assessing heterogeneity of primary studies in systematic reviews and whether to combine their results. CMAJ 2005; 172: 661–665.
20. Moher D, Cook DJ, Eastwood S, et al. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Quality of Reporting of Meta-analyses. Lancet 1999; 354: 1896–1900.
21. Gill JL. Evolution of statistical design and analysis of experiments. J Dairy Sci 1981; 64: 1494–1519.