Since 1952, the AVMA COE has been the US Department of Education–approved accrediting body for veterinary professional education. Furthermore, since 1973, the AVMA Educational Commission for Foreign Veterinary Medical Graduates has been the credentialing body for foreign veterinary medical graduates who wish to enter the licensure process in the United States. It is in part through these 2 processes, professional school accreditation and foreign graduate credentialing, that a set of educational standards and competencies for the veterinary medical profession has been developed and applied.
Factors that have increased the pressure for identification and use of educational outcomes assessment in veterinary medicine include a growing imperative to move closer to national and global standards for entry-level veterinary medical professional competency, partial loss of confidence in the integrity of other self-regulated professions (eg, accounting1) resulting in increased scrutiny of professional competencies, guidance from the Council for Higher Education Accreditation and the AVMA COE, and national trends in accreditation of other health professions (eg, medicine, dentistry, and nursing).
Assessment of the outcomes resulting from preclinical education is important because preclinical education often comprises more than half the time invested in the education of medical professionals. In the case of the College of Veterinary Medicine and Biomedical Sciences at Texas A&M University, preclinical education occurs over a 3-year period, in addition to baccalaureate preparation. Some elements of clinical education (eg, entry-level history taking and physical examination and rudimentary clinical reasoning) are introduced via a series of clinical correlates and other clinical experiences administered throughout the preclinical years. The belief that students will be able to draw on the biomedical science information taught during these preclinical years when they are presented with clinical problems constitutes a major reason for the willingness of medical educators to make such a heavy investment of time and resources in this effort. Although some investigators have questioned the role of knowledge of the biomedical sciences in the clinical reasoning used in routine diagnosis,2 recent studies have identified advantages of preclinical instruction in the development of diagnostic expertise. For example, Woods et al3 found that undergraduate students who received instruction concerning the mechanisms underlying a particular disease exhibited better diagnostic performance 1 week after that instruction than students in a control group. The design of that research, however, did not reveal how long this advantage was maintained. This question may have been answered by Van de Wiel et al,4 who found that knowledge of the biomedical sciences does not decay over time when it becomes encapsulated within a network of clinically relevant information through the protracted reapplication of detailed biomedical and clinical knowledge in the diagnostic process.
It is not clear what happens to biomedical science information if it is not encapsulated through the process of sustained clinical experience.
Unlike their counterparts in human medicine, veterinarians are responsible for treating a wide variety of animal species. Although there are similarities across species, differences in diagnostic and therapeutic approaches exist. Many colleges of veterinary medicine have responded to the challenge of preparing their students to care for the wide variety of animal species by allowing them to choose a clinical track that concentrates their studies in selected areas. Students complete a core requirement of clinical rotations across all species and then have the opportunity, through a combination of track-specific requirements and electives, to complete additional rotations within their specific areas of interest. Although students are allowed to concentrate their studies, they are still required to sit for national and state board examinations requiring in-depth knowledge of all major species. A potential consequence of clinical tracking is differential proficiency across the species spectrum.
The objective of the study reported here was to examine the effect of various clinical tracks within the veterinary medical clinical curriculum on clinical diagnostic proficiency as determined by assessments administered before and after clinical training. Specifically, we anticipated that students would have greater diagnostic efficiency and improvement in diagnostic proficiency in the fields in which they had tracked during their clinical training. Improvements in diagnostic proficiency were perceived to reflect advancement attributable to experiential learning in the clinical setting.
Materials and Methods
Thirty-two veterinary students whose performance in the veterinary curriculum was characterized by a range of GPAs were selected from the incoming fourth- year veterinary school class. Of the 32 students in this study, 22 were in the small animal clinical track, 2 were in the alternative track (which includes a broad range of career alternatives ranging from exotic animals to laboratory animal medicine to careers in public health), 5 were in the mixed track (small animal, equine, and food animal), and 3 were in the large animal track (food animal and equine). This distribution was representative of the overall distribution of students across tracks in the larger class. The study was designed to assess improvement in students' clinical reasoning when they were presented with familiar and unfamiliar clinical problems. The selection of veterinary students from the 4 clinical tracks of the curriculum allowed these differences to be examined because these students were exposed to various clinical experiences during their fourth year.
A series of cases (equine [n = 2], bovine [1], and small animal [2]) written by clinical specialists was used to assess clinical competencies. Participants completed 2 examinations: one early in the fourth (clinical) year of the program and the other at the end of the fourth year. Both examinations consisted of 6 clinical cases (3 small animal, 2 equine, and 1 food animal). The second examination was identical to the first in format but contained 2 new equine cases, a new food animal case, and 2 new small animal cases. Each student was asked to identify problems and differential diagnoses, select diagnostic tests, and recommend treatment for each of these cases. Three hours were allowed for completion of each examination, and the examinations were scheduled when students did not have other professional curriculum obligations. The examinations were graded by a veterinary resident using a performance rubric developed by the authors of the clinical cases.
All student participants were informed of the study design and completed an Institutional Review Board– approved consent form. Each student who completed the pretest was paid $50. At the end of the fourth year, each student in the original group of 32 was asked to complete the posttest. Students were paid $75 for completion of this examination. There were additional monetary rewards for performance on the posttest (eg, highest score and most improved score) to encourage students to put their best effort into the activity. There was no expectation that students would study or do extra preparation to take either examination. Our expectation was that students would do their best but that the outcome would in no way affect student standing in or progression through the veterinary curriculum. Identity of all study participants was confidential.
Data analysis—Mean test scores for pre- and postclinical training assessments were calculated by track for small animal tests, large animal tests, and combined scores. Pre- and postclinical training assessment scores were compared by use of a Wilcoxon signed rank test for matched pairs. For the analysis of the effect of clinical tracking on student performance, results from students in large animal, mixed animal, and alternative tracks were combined because of the low sample numbers. The effect of clinical track (small animal vs all others) on postclinical training assessment score and the difference between pre- and postclinical training assessment scores were assessed by use of a Mann-Whitney test for test scores on small animal cases and large animal cases and the combined score for small and large animal cases. Potential influences of other student-level characteristics (eg, undergraduate GPA and GRE score) were evaluated by use of Spearman rank correlation coefficients. Statistical significance was defined as P < 0.05. All statistical analyses were performed by use of commercially available software.a
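The analyses described above can be sketched in Python with `scipy.stats`; the study itself used commercial software, and the score and GPA values below are illustrative stand-ins, not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon, mannwhitneyu, spearmanr

rng = np.random.default_rng(0)

# Paired pre/post total scores for 32 students (illustrative values only).
pre = rng.normal(78, 18, 32)
post = pre + rng.normal(22, 10, 32)

# Wilcoxon signed rank test for matched pre/post pairs.
w_stat, p_paired = wilcoxon(pre, post)

# Mann-Whitney test comparing small animal track vs all other tracks on
# the posttraining score (first 22 students = small animal track, as in
# the study's track distribution).
small_post, other_post = post[:22], post[22:]
u_stat, p_track = mannwhitneyu(small_post, other_post, alternative="two-sided")

# Spearman rank correlation between a student-level predictor (eg, GPA)
# and total assessment score.
gpa = rng.normal(3.4, 0.3, 32)
rho, p_rho = spearmanr(gpa, post)

print(f"Wilcoxon P = {p_paired:.3f}; Mann-Whitney P = {p_track:.3f}; rho = {rho:.2f}")
```

Nonparametric tests are the appropriate choice here because, as noted in the Discussion, the residuals of the score data were not normally distributed.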
Results
Significant differences were detected between the total scores for pre- and postclinical assessments (P < 0.001) and for the small animal component of the assessment (P < 0.001) but not for the large animal component of the assessment (P = 0.41; Table 1). Student performance for all tracks (small animal, mixed animal, large animal, and alternative) improved numerically between pre- and postassessment scores, with the exception of small animal track students on large animal components of the assessment (Figure 1). Large animal posttest scores were significantly lower for small animal track students, compared with scores for all other tracks (P = 0.02; Figure 2). Significant correlations were detected between GRE test scores and small animal posttest results. Spearman rank correlation coefficients associated with the small animal posttest score were 0.45, 0.43, and 0.44 for the total combined GRE, GRE verbal, and quantitative sections, respectively. Similar associations were detected among total combined GRE scores (ρ = 0.38), GRE verbal (ρ = 0.40), and GRE quantitative (ρ = 0.40) and total assessment score, including both small and large animal sections. Final GPA was significantly correlated with total (ρ = 0.47) and small animal assessment scores (ρ = 0.45).
Table 1—Mean, SD, and median clinical assessment scores prior to (Pre) and after (Post) the clinical training period in 32 fourth-year veterinary students, by clinical track.

| Type of assessment | Small animal track | Mixed animal track | Large animal track | Alternative track |
|---|---|---|---|---|
| **Small animal** | | | | |
| Pre (mean ± SD; median) | 42.1 ± 11.1; 40.0 | 42.8 ± 12.4; 41.0 | 39.7 ± 13.1; 44.0 | 41.0 ± 19.8; 41.0 |
| Post (mean ± SD; median) | 69.1 ± 15.1; 66.1 | 70.9 ± 22.7; 79.7 | 60.0 ± 11.0; 61.4 | 75.1 ± 11.2; 75.1 |
| **Large animal** | | | | |
| Pre (mean ± SD; median) | 36.4 ± 11.2; 35.8 | 35.4 ± 14.9; 34.7 | 25.7 ± 10.4; 28.2 | 40.0 ± 3.5; 40.1 |
| Post (mean ± SD; median) | 31.0 ± 7.0; 32.5 | 36.8 ± 5.4; 37.0 | 37.3 ± 8.1; 42.0 | 39.0 ± 5.7; 39.0 |
| **Total** | | | | |
| Pre (mean ± SD; median) | 78.5 ± 16.8; 79.2 | 78.2 ± 27.1; 75.7 | 65.3 ± 23.5; 72.2 | 81.0 ± 16.3; 81.1 |
| Post (mean ± SD; median) | 100.1 ± 18.0; 97.7 | 107.7 ± 26.6; 117.8 | 97.4 ± 12.9; 90.4 | 114.1 ± 16.8; 114.1 |
Discussion
In this study, we attempted to identify differences in clinical reasoning proficiency that develop as a result of student training in various clinical tracks. Because of the small sample size in some of the clinical tracks, we elected to combine the non–small animal tracks for analysis. This limits the comparisons that can be made and the conclusions that can be drawn regarding the influence of clinical tracking on the development of clinical proficiency. The small sample size also limited the statistical tests that could be used and the power available to detect statistical differences. The results reflected nonparametric statistical tests used because of the nonnormally distributed residuals associated with test score data. The results of this study supported our hypothesis that differences in clinical experiences between the small animal track and all other track opportunities (large animal, mixed animal, and alternative) influence the development of clinical proficiency in fourth-year veterinary students during their clinical training period. In general, students' scores improved on this assessment instrument, and their improvement on small animal assessments was greater than that observed for large animal cases. This was particularly evident in small animal track students, whose test scores for large animal assessments decreased after completion of their clinical training period. Whether this decline in large animal competencies was caused by a generalized inability of the students to draw upon their preclinical biomedical science knowledge; a lack of motivation to expend the mental effort required to draw from their prior knowledge; or a deficit in their retained preclinical biomedical science information, specifically in regard to large animal medicine and surgery, could not be determined.
We suggest that the differential improvement in clinical reasoning scores on the small animal versus the large animal component of the examination reflects the greater proportion of time students spend in small animal rotations. This occurs regardless of the student's clinical track because of the preponderance of core rotations within the small animal clinic. This is a common situation in veterinary colleges during the clinical training period. Additionally, more of the preclinical biomedical sciences training is devoted to small animal courses or is presented largely in small animal context. This is likely a result of a shift toward small animal medicine and surgery for a larger proportion of veterinary students, compared with historical distributions.
Inferences regarding the learning methods associated with these differences are difficult to make. These differences could reflect enhanced preclinical biomedical science instruction in the small animal clinical sciences that confers greater ability to develop expert-like problem-solving processes. They may also reflect a greater number of cases encountered during clinical training in the small animal rotations and, therefore, an expanded set of cases from which to practice pattern recognition. Our results suggest that small animal track students do not receive enough clinical experience in large animal medicine and surgery to allow encapsulation; thus, this unincorporated knowledge may become lost because of memory decay.5
Limited correlations existed between traditional predictors of student competency (eg, grades and standardized test scores) and performance on clinical reasoning assessments. Generally, GRE scores, including the total score and scores on the verbal and quantitative subsections, were correlated with performance on the small animal assessments. Final GPA was similarly correlated with these assessment scores. The limited improvement in large animal assessment proficiency may have obscured more subtle correlations with the predictors in this study. However, medical experts draw on both knowledge of the biomedical sciences and clinical experience when presented with an unfamiliar clinical problem. Because encapsulation of biomedical science knowledge occurs after students have gained sufficient clinical experience to allow them to understand the interrelationship between their knowledge and their experience, our finding of limited correlations with these predictors should not be surprising. Our observations suggest that students rely heavily on prior knowledge of clinical patterns when confronted with clinical problem-solving opportunities rather than on preclinical knowledge of the biomedical sciences. Future analyses on a larger cohort would ideally include assessment of correlations between performance on clinical reasoning examinations and postgraduation measures of performance, such as scores on licensure examinations and employment success.
One difficulty encountered in this study was the lack of clear standards for evaluating clinical reasoning. Even though clinical reasoning is a goal of paramount importance in medical education, the development of assessments that reliably measure this trait has proven difficult. Issues concerning content specificity, the heuristic nature of expert problem-solving strategies, and the tendency of some evaluation instruments to rate novice performance superior to expert performance have impeded progress in the development of successful clinical reasoning assessment strategies.6 Although numerous adaptations have been attempted, to date, no reliable measure of clinical reasoning has been validated. Because there is substantial variety across schools and countries as to the quality and extent of student clinical reasoning, there is a critical need for tools that can be used to evaluate this skill.
ABBREVIATIONS
| Abbreviation | Definition |
|---|---|
| COE | Council on Education |
| GPA | Grade point average |
| GRE | Graduate Record Examination |
a. Intercooled Stata, version 9.2 for Windows, Stata Corp, College Station, Tex.
References

1. Chaney PK, Philipich KL. Shredded reputation: the cost of audit failure. J Account Res 2002;40:1221–1245.
2. Patel VL, Groen GJ, Scott HM. Biomedical knowledge in explanations of clinical problems by medical students. Med Educ 1988;22:398–406.
3. Woods NN, Brooks LR, Norman GR. The value of basic science in clinical diagnosis: creating coherence among signs and symptoms. Med Educ 2005;39:107–112.
4. Van de Wiel MWJ, Boshuizen HPA, Schmidt HG, et al. The explanation of clinical concepts by expert physicians, clerks, and advanced students. Teach Learn Med 1999;11:153–163.
6. Schuwirth LWT, van der Vleuten CPM. The use of clinical simulations in assessment. Med Educ 2003;37(suppl 1):61–67.