
  • 1. Kuncel NR, Wee S, Serafin L, et al. The validity of the Graduate Record Examination for master's and doctoral programs: a meta-analytic investigation. Educ Psychol Meas 2010; 70: 340–352.
  • 2. Confer AW. Preadmission GRE scores and GPAs as predictors of academic performance in a college of veterinary medicine. J Vet Med Educ 1990; 17: 56–62.
  • 3. Confer AW, Lorenz MD. Pre-professional institutional influence on predictors of first-year academic performance in veterinary college. J Vet Med Educ 1999; 26: 16–20.
  • 4. Collin VT, Violato C, Hecker K. Aptitude, achievement and competence in medicine: a latent variable path model. Adv Health Sci Educ Theory Pract 2009; 14: 355–366.
  • 5. Ferguson E, James D, Madeley L. Factors associated with success in medical school: systematic review of the literature. BMJ 2002; 324: 952–957.
  • 6. White CB, Dey EL, Fantone JC. Analysis of factors that predict clinical performance in medical school. Adv Health Sci Educ Theory Pract 2009; 14: 455–464.
  • 7. Wilkinson D, Zhang J, Byrne GJ, et al. Medical school selection criteria and the prediction of academic performance. Med J Aust 2008; 188: 349–354.
  • 8. Evans P, Wen FK. Does the Medical College Admission Test predict global academic performance in osteopathic medical school? J Am Osteopath Assoc 2007; 107: 157–162.
  • 9. Hamdy H, Prasad K, Anderson MB, et al. BEME systematic review: predictive values of measurements obtained in medical schools and future performance in medical practice. Med Teach 2006; 28: 103–116.
  • 10. Pretz JE, Naples AJ, Sternberg RJ. Recognizing, defining, and representing problems. In: Davidson JE, Sternberg RJ, eds. The psychology of problem solving. Cambridge, England: Cambridge University Press, 2003; 3–30.
  • 11. Bordage G. Elaborated knowledge: a key to successful diagnostic thinking. Acad Med 1994; 69: 883–885.
  • 12. Chang RW, Bordage G, Connell KJ. The importance of early problem representation during case presentations. Acad Med 1998; 73: S109–S111.
  • 13. Mertler C, Vannatta R. Advanced and multivariate statistical methods: practical application and interpretation. 3rd ed. Glendale, Calif: Pyrczak Publishing, 2005.
  • 14. Byrne BM. Structural equation modeling with LISREL, PRELIS, and SIMPLIS: basic concepts, application, and programming. Mahwah, NJ: Erlbaum, 1998.
  • 15. Yates J, James D. Predicting the "strugglers": a case-control study of students at Nottingham University Medical School. BMJ 2006; 332: 1009–1013.
  • 16. Satorra A, Bentler PM. Scaling corrections for chi-square statistics in covariance structure analysis, in Proceedings. Bus Econ Statist Sect Am Stat Assoc 1988; 308–313.


Relationships among common measures of student performance and scores on the North American Veterinary Licensing Examination

  • 1 Department of Veterinary Pathology, College of Veterinary Medicine, Iowa State University, Ames, IA 50011.
  • 2 The Office of Curricular and Student Assessment, College of Veterinary Medicine, Iowa State University, Ames, IA 50011.
  • 3 The Office of Curricular and Student Assessment, College of Veterinary Medicine, Iowa State University, Ames, IA 50011.
  • 4 Department of Veterinary Clinical Sciences, College of Veterinary Medicine, University of Minnesota, Saint Paul, MN 55108.
  • 5 The Office of Curricular and Student Assessment, College of Veterinary Medicine, Iowa State University, Ames, IA 50011.

Abstract

Objective—To determine relationships among several common measures of performance prior to and during veterinary school (ie, Graduate Record Examination [GRE] scores, undergraduate grade point average [UGPA], Qualifying Examination [QE] scores, overall grade point average during veterinary school [VGPA], and scores for the North American Veterinary Licensing Examination [NAVLE]).

Design—Longitudinal retrospective study.

Sample Population—192 students from the Iowa State University College of Veterinary Medicine and 152 students from the University of Minnesota College of Veterinary Medicine.

Procedures—Student UGPA, VGPA, and GRE score data were gathered during the normal admissions and academic processes. The QE was administered as a low-stakes examination at both institutions for the purposes of curricular assessment. Scores on the NAVLE were provided with student permission by the National Board of Veterinary Medical Examiners. Path analysis was used to explore hypothesized relationships among variables.

Results—GRE scores and UGPA predicted NAVLE scores indirectly through QE scores and VGPA, whereas QE scores and VGPA directly predicted NAVLE scores. The resulting models explained 58% to 62% of the variance in NAVLE scores, with QE score being the strongest predictor.

Conclusions and Clinical Relevance—Results indicated that for veterinary school students, GRE scores, UGPA, VGPA, and QE scores could be used to predict scores on the NAVLE. This suggests that these measures could prove useful to veterinary schools when admitting students or preparing them for the NAVLE.

The primary goal of any veterinary educational program is to produce graduates who become highly competent, successful, and satisfied professionals. To meet that end, veterinary schools seek to design their admissions and instructional practices in ways that will help graduates obtain licensure and succeed in their subsequent careers. To design effective admissions and teaching processes, it is useful for colleges to understand how measures of knowledge or aptitude that are used for admissions or during veterinary school relate to subsequent success or failure. Thus, the present study was designed to assess relationships among several common measures of performance prior to and during veterinary school and scores on the NAVLE. The NAVLE is an electronically administered, 360-item, multiple-choice examination that is required for veterinary licensure in the United States and Canada. Veterinary schools have great interest in NAVLE scores because the NAVLE score directly determines a graduate's opportunity to gain licensure as a veterinary medical practitioner and because the NAVLE is the only currently available standardized measure of clinical ability used by all veterinary colleges in the United States and Canada. Measures of student performance evaluated in the present study consisted of GRE scores, UGPA, QE scores, and VGPA.

The National Board of Veterinary Medical Examiners, which administers the NAVLE, did not begin making raw NAVLE scores available to accredited veterinary schools until 2009. As a result, reviews of medical and general education scholarly indexes revealed no published studies exploring the relationship between NAVLE scores and other measures of performance prior to or during participation in the veterinary curriculum. Therefore, we reviewed literature from veterinary medical education and closely related areas to form hypotheses regarding relationships among these predictors and NAVLE scores.

The GRE measures knowledge and cognitive skills thought to predict aptitude for graduate study, and GRE score is a significant predictor of performance in master's and doctoral degree programs across a variety of disciplines.1 The 2009 comparative data report, shared among AVMA-accredited colleges of veterinary medicine, revealed that of the 28 US veterinary schools and colleges, all but 3 reported GRE scores (verbal, quantitative, and analytical), while none of the 5 Canadian schools reported GRE scores. Only 1 US veterinary college reported the biology subject test score. A search of the literature revealed few references to the GRE's ability to predict performance in medical science programs, primarily because human medical schools use instruments such as the Medical College Admission Test or the Graduate Australian Medical School Admissions Test, which are specifically designed to predict performance in medical school. Two studies2,3 exploring the predictive power of GRE scores in veterinary medicine provided mixed results, with overall GRE score and scores on the various GRE subsections predicting VGPA to a greater or lesser degree. In a study2 involving 8 academic classes at Oklahoma State University, Confer found that scores for the GRE subsections (analytical, quantitative, or verbal) significantly predicted VGPA in most years. However, no particular subsection was a significant predictor across all years. In a subsequent study,3 Confer and Lorenz found that scores for the GRE biology subject test were a significant predictor of VGPA. Because GRE scores generally are positively associated with performance in graduate programs and are predictive of VGPA, we hypothesized that GRE scores would be predictive of all subsequent measures explored in our study, including QE scores, VGPA, and NAVLE scores.

Undergraduate grade point averages are broadly available to schools and colleges of veterinary medicine and have been shown to predict grade point average in veterinary school2,3 and similar settings such as colleges of medicine4–7 and colleges of osteopathic medicine.8 In human medical education, UGPA has also been shown to predict performance on licensing examinations and postgraduate clinical competency.4,5 Therefore, we expected that UGPA would be positively associated with subsequently measured variables, including VGPA, QE scores, and NAVLE scores.

Grades during medical school have been shown to predict subsequent clinical performance, as measured by means of clinical encounters (within the curriculum and during residency), and licensing examination scores.6,9 As a result, we hypothesized that VGPA would positively predict performance on the NAVLE, as measured by NAVLE score.

The QE is a 300-item, multiple-choice examination that is designed to assess basic proficiency in the areas of veterinary anatomy, physiology, pharmacology, microbiology, and pathology. It was created by the National Board of Veterinary Medical Examiners, the same organization that administers the NAVLE, to be administered to graduates of programs not accredited by the AVMA, as part of the PAVE. The PAVE is accepted by 31 states and territories in the United States as evidence that the educational component of licensure requirements has been met, and PAVE participants have to pass the QE before they are allowed to proceed to the assessment of clinical skills proficiency. As is standard for commercially available standardized examinations, the QE has a number of forms, with different forms used each time the examination is administered. Content experts are employed to create items, and new items are reviewed prior to their incorporation in the examination. Care is taken to ensure that the psychometric properties of the test are equivalent from form to form and year to year.

A search of medical and general education research indexes revealed no studies exploring the relationship between QE scores and other predictors of academic, clinical, or professional performance. However, there are 2 reasons to believe that QE scores would be a good predictor of NAVLE scores. First, for various knowledge domains, underlying domain knowledge has been shown to be characteristic of expertise.10 In the medical field, for instance, effective diagnosticians have been shown to have more elaborated underlying knowledge structures11 and more thorough knowledge representations12 than do their less effective counterparts. If basic science knowledge is essential to practicing medicine effectively and the QE measures basic science knowledge while the NAVLE measures clinical readiness, then QE scores should predict NAVLE scores. Second, the QE is similar to an examination used to measure basic science knowledge in human medical education that has been shown to predict subsequent clinical knowledge. That examination, step 1 of the US Medical Licensing Examination, assesses whether students understand and can apply important concepts of the sciences fundamental to the practice of medicine. The step 1 examination has been shown to predict performance on subsequent measures of clinical performance or clinical readiness, including clerkship performance6 and clinically oriented licensing examinations.9 Because the QE is similar to step 1 of the US Medical Licensing Examination in purpose and because the NAVLE is intended to measure clinical readiness, we hypothesized that QE scores would be predictive of NAVLE scores.

Overall, therefore, we hypothesized that GRE scores would directly predict QE scores, VGPA, and NAVLE scores and that GRE scores would also indirectly predict NAVLE scores through QE scores and VGPA. Similarly, we hypothesized that UGPA would directly predict QE scores, VGPA, and NAVLE scores and would also indirectly predict NAVLE scores through QE scores and VGPA. We also hypothesized that QE scores and VGPA would directly predict NAVLE scores. Because QE scores and VGPA reflect ability and knowledge specific to veterinary medicine, we hypothesized that they would be stronger predictors of NAVLE scores than UGPA or GRE scores.

Many studies exploring relationships among common predictors of academic performance and subsequent outcomes use multiple regression analysis. For the present study, we decided instead to use path analysis because it provides several advantages over multiple regression analysis.13 First, path analysis can explain the relationships between a set of independent variables (eg, GRE scores or UGPA) and a set of dependent variables (eg, QE scores, VGPA, or NAVLE scores). The path coefficient is similar in meaning to the standardized regression coefficient in a multiple regression analysis and represents the extent to which 1 unit of change in the independent variable would predict the level of change in the dependent variable, while controlling for other predictors. Second, path analysis can examine the indirect effect among variables, which is not possible with multiple regression analysis.13 An indirect effect means that the effect of a predictor on the dependent variable is mediated through the predictor's effect on another variable. For example, in this study, we investigated whether GRE scores and UGPA indirectly affected NAVLE scores through their effects on QE scores and VGPA. Third, path analysis can examine whether hypothesized theoretical models fit the data. In summary, path analysis provides a more complete understanding of the relationships among various variables than does multiple regression analysis.
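
The mediation logic described above can be sketched numerically. The following is an illustrative example, not the authors' analysis: it simulates data with made-up effect sizes for the five measures in this study, estimates each equation of a recursive path model by ordinary least squares on standardized variables (which yields path coefficients for a model like the one hypothesized here), and recovers an indirect effect as the product of the coefficients along a path.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical data: two exogenous predictors and three downstream outcomes,
# with made-up effect sizes loosely echoing the hypothesized model.
gre = rng.normal(size=n)
ugpa = rng.normal(size=n)
qe = 0.3 * gre + 0.4 * ugpa + rng.normal(scale=0.8, size=n)
vgpa = 0.5 * ugpa + rng.normal(scale=0.8, size=n)
navle = 0.6 * qe + 0.3 * vgpa + rng.normal(scale=0.6, size=n)

def std(x):
    # Standardize to mean 0, SD 1.
    return (x - x.mean()) / x.std()

def path_coefs(y, *xs):
    # Standardized OLS coefficients = path coefficients for one equation
    # of a recursive path model.
    X = np.column_stack([std(x) for x in xs])
    beta, *_ = np.linalg.lstsq(X, std(y), rcond=None)
    return beta

b_qe = path_coefs(qe, gre, ugpa)       # paths GRE -> QE, UGPA -> QE
b_vgpa = path_coefs(vgpa, ugpa)        # path UGPA -> VGPA
b_navle = path_coefs(navle, qe, vgpa)  # paths QE -> NAVLE, VGPA -> NAVLE

# Indirect effect of GRE on NAVLE, mediated through QE: the product of the
# coefficients along the path GRE -> QE -> NAVLE.
indirect_gre = b_qe[0] * b_navle[0]
print(np.round(b_qe, 2), np.round(b_vgpa, 2), np.round(b_navle, 2),
      round(indirect_gre, 2))
```

In a fully recursive model, each endogenous variable gets its own regression, and an indirect effect (eg, GRE score → QE score → NAVLE score) is simply the product of the path coefficients it traverses, which is what multiple regression alone cannot provide.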

Materials and Methods

Participants—The research protocol was reviewed and approved by the institutional review boards of ISU and UMN. Performance data were obtained for all students graduating in 2009 and 2010 from ISU and UMN who consented to have their NAVLE scores released to the veterinary college. This included 192 students from ISU and 152 students from UMN. Survey data regarding the QE were obtained from all ISU graduates of 2009 and 2010 who elected to respond to the anonymous post-QE survey (197 students; 84% response rate) and from all UMN graduates of 2009 who elected to respond to the post-QE survey (85 students; 97% response rate).

Procedures—Undergraduate grade point averages and GRE scores were obtained through the normal admissions processes at each school. For ISU, overall UGPA was used. Because UMN does not request overall UGPA, the grade point average for courses required as a prerequisite for admission (representing 57 to 91 credit hours) was used. Because the biology subject test was not used by ISU or UMN, overall GRE score was used.

With each student's permission, NAVLE scores were obtained from the National Board of Veterinary Medical Examiners. For students who took the NAVLE more than once, the score for the first time the examination was taken was used. All participating veterinary students took the QE during January (ie, the spring semester) of their third year, at which point the students had completed their basic science courses and were midway through their didactic clinical instruction. Although all students were required to take the QE, their examination results did not contribute to their grades and none of the students were held back because they did not pass the examination. Students were encouraged to not study for the examination because our intent was to measure how much basic science knowledge students had obtained through their normal activities as students and not to measure what they could learn during intensive pre-examination studying. However, students were asked to take the examination seriously and were told that their QE scores would likely help them determine their readiness for the NAVLE and identify any areas of weakness. The QE was offered in proctored settings consistent with the guidelines established by the National Board of Veterinary Medical Examiners. To help determine the extent to which students took the examination seriously and considered it to be aligned with the curriculum in which they were participating, students completed a brief survey following test administration.

Data analysis—To calculate GRE scores, we standardized the scores for the 3 subsections (verbal, quantitative, and analytical writing) and summed the standardized subsection scores for each participant. We did this because the 3 sections use different scales, with scores for the verbal and quantitative sections ranging from 0 to 800 and scores for the analytical section ranging from 0 to 6.
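
The composite described here takes only a few lines to compute. This is a sketch with made-up subsection scores (the variable and function names are ours, not from the study):

```python
import numpy as np

# Hypothetical GRE subsection scores for 4 applicants; verbal and
# quantitative are on a 0-800 scale, analytical writing on a 0-6 scale.
verbal = np.array([450.0, 600.0, 520.0, 710.0])
quantitative = np.array([700.0, 640.0, 580.0, 760.0])
analytical = np.array([4.0, 5.5, 3.5, 5.0])

def z(x):
    # Standardize to mean 0, SD 1 so sections on different scales are comparable.
    return (x - x.mean()) / x.std()

# Composite GRE score = sum of the three standardized subsection scores.
gre_composite = z(verbal) + z(quantitative) + z(analytical)
```

Because each standardized subsection has mean 0, the composite is also centered at 0, which is why the GRE means in Table 1 are 0 for every class.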

Path analysis—To investigate the fit of the hypothesized model, we conducted path analysis with the maximum likelihood method, as described14; standard softwarea was used. Model fit was estimated by use of 3 indexes: the comparative fit index, the root-mean-square error approximation, and the standardized root-mean-square residual. As recommended,15 a comparative fit index ≥ 0.95, a root-mean-square error approximation ≤ 0.06, and a standardized root-mean-square residual ≤ 0.08 were used to indicate that the model provided an appropriate fit to the data. Data for students who graduated from ISU in 2009 were used to evaluate the hypothesized model, and data for students who graduated from ISU in 2010 and for students who graduated from UMN in 2009 or 2010 were used to cross-validate the findings.
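
The three cutoffs cited here can be captured in a small helper; this is a sketch only (the function name is ours, not part of the SEM software used):

```python
def acceptable_fit(cfi: float, rmsea: float, srmr: float) -> bool:
    """Apply the fit criteria used in this study: comparative fit
    index >= 0.95, root-mean-square error approximation <= 0.06, and
    standardized root-mean-square residual <= 0.08."""
    return cfi >= 0.95 and rmsea <= 0.06 and srmr <= 0.08

# Example: a model with CFI = 1.00, RMSEA = 0.06, SRMR = 0.04 meets all
# three criteria; one with CFI = 0.98 but RMSEA = 0.12 does not.
print(acceptable_fit(1.00, 0.06, 0.04), acceptable_fit(0.98, 0.12, 0.06))
```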

Correlations among measures of student performance were assessed by use of the Pearson correlation method. Standard softwareb was used.
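
A zero-order Pearson correlation of the kind reported here can be computed directly; a sketch with made-up paired scores (not data from this study):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical QE and NAVLE scores for 6 students.
qe_scores = np.array([229, 310, 180, 255, 290, 205])
navle_scores = np.array([520, 640, 430, 555, 600, 470])

# r is the zero-order correlation coefficient; p tests whether r
# differs significantly from 0.
r, p = pearsonr(qe_scores, navle_scores)
```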

Results

Measures of student performance and NAVLE scores for the 4 classes of students included in the study were summarized (Table 1). Examination of correlation coefficients revealed that pre-NAVLE performance measures were significantly correlated with NAVLE scores in all but 3 instances (UGPA in 2009 at ISU, GRE in 2010 at UMN, and UGPA in 2009 at UMN; Table 2).

Table 1—

Summary of measures of student performance and NAVLE scores for 4 classes of veterinary college students.

Variable      ISU 2009           ISU 2010           UMN 2009           UMN 2010
GRE score     0 ± 2.14           0 ± 2.06           0 ± 2.03           0 ± 2.17
              (−4.31 to 5.29)    (−5.43 to 3.87)    (−6.24 to 4.16)    (−4.73 to 5.32)
UGPA          3.57 ± 0.27        3.58 ± 0.27        3.60 ± 0.23        3.55 ± 0.27
              (2.90 to 4.00)     (2.81 to 4.00)     (2.99 to 4.00)     (2.98 to 4.00)
QE score      229.18 ± 38.10     233.08 ± 43.17     216.11 ± 38.26     220.29 ± 44.00
              (143 to 330)       (140 to 339)       (127 to 306)       (126 to 354)
VGPA          3.21 ± 0.32        3.29 ± 0.32        3.33 ± 0.36        3.30 ± 0.45
              (2.37 to 3.94)     (2.40 to 3.89)     (2.49 to 3.98)     (2.28 to 4.00)
NAVLE score   524.68 ± 65.85     534.11 ± 62.32     530.65 ± 66.08     543.16 ± 62.42
              (394 to 734)       (379 to 707)       (386 to 662)       (403 to 720)

Data are given as mean ± SD (range). The minimum UGPA for admission to ISU was 2.5. Iowa State University did not specify a minimum GRE score for admission but did consider GRE scores in admissions decisions, giving preference to higher scores. University of Minnesota did not specify a minimum UGPA or GRE score, but admissions requirements at UMN made it unlikely that students would be admitted with a UGPA < 2.75 or a GRE score below the 35th percentile. ISU 2009 = Students who graduated from the ISU College of Veterinary Medicine in 2009 (n = 96). ISU 2010 = Students who graduated from the ISU College of Veterinary Medicine in 2010 (n = 96). UMN 2009 = Students who graduated from the UMN College of Veterinary Medicine in 2009 (n = 82). UMN 2010 = Students who graduated from the UMN College of Veterinary Medicine in 2010 (n = 70).

Table 2—

Correlations among measures of student performance and NAVLE scores for students in Table 1.

Variable       GRE score        UGPA            QE score         VGPA             NAVLE score
GRE score      NA               0.17 (0.01)     0.43* (0.38*)    0.22 (0.14)      0.23 (0.30)
UGPA           −0.06 (−0.03)    NA              0.43* (0.16)     0.47* (0.31)     0.37* (0.16)
QE score       0.38* (0.08)     0.19 (0.40*)    NA               0.60* (0.68*)    0.74* (0.79*)
VGPA           0.36* (0.12)     0.22 (0.46*)    0.67* (0.53*)    NA               0.61* (0.69*)
NAVLE score    0.43* (0.20)     0.17 (0.41*)    0.75* (0.78*)    0.69* (0.61*)    NA

Data are given as zero-order correlation coefficients. Coefficients above the diagonal represent data for students from ISU, and coefficients below the diagonal represent data for students from UMN. Within each pair, the first coefficient represents data for students who graduated in 2009 and the coefficient in parentheses represents data for students who graduated in 2010.

*Correlation coefficient was significantly (P < 0.05) different from 0.

†Correlation coefficient was significantly (P < 0.01) different from 0.

‡Correlation coefficient was significantly (P < 0.001) different from 0.

Student impressions of the QE—Most students found the QE to be of average difficulty or harder (Table 3). In addition, most students, as requested, indicated that they did not study for the QE, and those who did study did not study for > 4 hours. Overall, 69.1% of respondents indicated that they took the QE at least as seriously as a course quiz or low-point course assignment (ie, a score ≥ 4 on a scale from 1 to 7), and only 3.9% reported that they did not take the QE seriously at all (ie, a score of 1). Finally, 91.1% of the respondents indicated that about 60% or more of the material covered by the examination had been discussed in at least 1 course at the veterinary college.

Table 3—

Results of a survey of student impressions of the QE administered to students in Table 1.

Values are the percentage of students giving each response.

Survey question / Response                        ISU 2009   ISU 2010   UMN 2009   Overall

How difficult was the QE?
  1 (extremely easy)                                     0          0          1       0.3
  2                                                      2          3          0       1.3
  3                                                      2          2          1       1.6
  4 (average difficulty)                                 8         26         13      15.1
  5                                                     30         24         32      29.2
  6                                                     48         32         45      42.3
  7 (extremely difficult)                                9         13          8      10.2

Did you study for the QE?
  1 (not at all)                                        94         99         97      96.7
  2 (1–2 h)                                              4          1          2       2.3
  3 (3–4 h)                                              2          0          1       1
  4 (5–6 h)                                              0          0          0       0
  5 (7–8 h)                                              0          0          0       0
  6 (9–10 h)                                             0          0          0       0
  7 (> 10 h)                                             0          0          0       0

How seriously did you take the QE?
  1 (not at all)                                         3          2          7       3.9
  2                                                     11          2          7       6.8
  3                                                     19         22         20      20.3
  4 (like a course quiz)                                28         33         25      28.4
  5                                                     19         23         20      21
  6                                                     12         16         15      14.2
  7 (like a final exam)                                  9          2          6       5.5

About what percentage of the material on the QE had been discussed in at least 1 course during veterinary college?
  1 (none)                                               0          0          0       0
  2 (about 10%)                                          0          0          0       0
  3 (about 20%)                                          1          1          0       0.7
  4 (about 30%)                                          0          0          1       0.3
  5 (about 40%)                                          2          2          4       2.6
  6 (about 50%)                                          5          5          6       5.3
  7 (about 60%)                                         11          7         14      10.9
  8 (about 70%)                                         26         21         31      26.4
  9 (about 80%)                                         28         30         27      28.4
  10 (about 90%)                                        27         29         16      23.4
  11 (about 100%)                                        0          5          1       2

ISU 2009 = Students who graduated from the ISU College of Veterinary Medicine in 2009 (n = 98). ISU 2010 = Students who graduated from the ISU College of Veterinary Medicine in 2010 (n = 99). UMN 2009 = Students who graduated from the UMN College of Veterinary Medicine in 2009 (n = 85).

Path analysis of the hypothesized model—Before formal path analysis of the hypothesized model was performed, we investigated whether data for ISU students who graduated in 2009 fit the multivariate normality assumption, a necessary requirement for the maximum likelihood method. Results of the multivariate normality test indicated that the data were normally distributed (P = 0.10).

Results of the path analysis for the hypothesized model indicated that the model was saturated and had a perfect fit for the data (P = 1.00; comparative fit index = 1.00; root-mean-square error approximation = 0.00; standardized root-mean-square residual = 0.00). Three of the paths in the hypothesized model were not significant: the path from GRE scores to VGPA, the path from GRE scores to NAVLE scores, and the path from UGPA to NAVLE scores (Figure 1). Therefore, we examined an alternative model that excluded these 3 paths from the hypothesized model.

Figure 1—
Figure 1—

Results of path analysis of the relationships among various measures of student performance and scores for the NAVLE. The best-fit model (solid lines only) is superimposed on the hypothetical model (dotted and solid lines). Path coefficients are grouped with coefficients for students from ISU listed above coefficients for students from UMN, with coefficients for students who graduated in 2009 listed first and coefficients for students who graduated in 2010 in parentheses. *Path coefficient was significantly (P < 0.05) different from 0. †Path coefficient was significantly (P < 0.01) different from 0. ‡Path coefficient was significantly (P < 0.001) different from 0.

Citation: Journal of the American Veterinary Medical Association 238, 4; 10.2460/javma.238.4.454

The alternative model also provided an appropriate fit for the data (P = 0.28; comparative fit index = 1.00; root-mean-square error approximation = 0.06 [90% confidence interval, 0.00 to 0.19]; standardized root-mean-square residual = 0.04). When the alternative and initial hypothesized models were compared, the difference in χ2 values was not significantly (P > 0.05) different from 0, suggesting that the 3 excluded paths did not significantly contribute to the hypothesized model. Therefore, on the basis of the parsimony principle, the alternative model was selected as the best-fit model for the data.

For students who graduated from ISU in 2009, all path coefficients were moderate in magnitude (Figure 1). The coefficients for the paths from GRE scores to QE scores and from UGPA to QE scores were 0.31 and 0.38, respectively, indicating that students who had high GRE scores or high UGPAs tended to have high QE scores, whereas students who had low GRE scores or low UGPAs tended to have low QE scores. The coefficient for the path from UGPA to VGPA was 0.47, suggesting that students who had a high UGPA also tended to have a high VGPA and students who had a low UGPA tended to have a low VGPA. The coefficients for the paths from QE scores to NAVLE scores and from VGPA to NAVLE scores were 0.58 and 0.27, respectively, indicating that students who had a high QE score or high VGPA tended to have a high NAVLE score. Overall, 28% of the variation in QE score was explained by variations in GRE score and UGPA, 22% of the variation in VGPA was explained by variation in UGPA, and 58% of the variation in NAVLE score was explained by variations in GRE score, UGPA, VGPA, and QE score combined.
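
Because effects along a path compose multiplicatively, the coefficients reported for ISU students who graduated in 2009 can be used to recover the indirect effects on NAVLE score; a small worked example:

```python
# Path coefficients reported for ISU 2009 in the best-fit model.
gre_to_qe, ugpa_to_qe = 0.31, 0.38
ugpa_to_vgpa = 0.47
qe_to_navle, vgpa_to_navle = 0.58, 0.27

# In the best-fit model, GRE score reaches NAVLE score only through QE score.
indirect_gre_navle = gre_to_qe * qe_to_navle

# UGPA reaches NAVLE score through both QE score and VGPA.
indirect_ugpa_navle = ugpa_to_qe * qe_to_navle + ugpa_to_vgpa * vgpa_to_navle
```

These work out to about 0.18 for GRE score and about 0.35 for UGPA, consistent with UGPA being the stronger indirect predictor of NAVLE score in this sample.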

Cross-validation of the path analysis model—To cross-validate the findings of the best-fit model for data from ISU students who graduated in 2009, we conducted a multiple-group analysis to test whether path coefficients for these students were equivalent to coefficients for ISU students who graduated in 2010 and for UMN students who graduated in 2009 or 2010. Before the multiple-group analysis, we examined whether data for these 3 additional samples were normally distributed. Results of the multivariate normality test indicated that data for UMN students who graduated in 2009 (P = 0.43) and for UMN students who graduated in 2010 (P = 0.36) were normally distributed but that data for ISU students who graduated in 2010 were not (P = 0.01). Therefore, scaled χ2 values for the model fit were calculated to adjust for the influence of non-normality.16

The multiple-group analysis involved comparing χ2 values between a freely estimated model and a constrained model.14 For the freely estimated model, the path coefficients were allowed to be varied and freely estimated across the 4 samples. In contrast, for the constrained model, the path coefficients were constrained (ie, forced to be identical) for the 4 samples. Both the freely estimated model (P = 0.01; comparative fit index = 0.98; root-mean-square error approximation = 0.12 [90% confidence interval, 0.05 to 0.18]; standardized root-mean-square residual = 0.06) and the constrained model (P = 0.06; comparative fit index = 0.98; root-mean-square error approximation = 0.07 [90% confidence interval, 0.0 to 0.12]; standardized root-mean-square residual = 0.09) provided an appropriate fit for the data. The difference in χ2 values between the freely estimated model and the constrained model was not significantly (P = 0.56) different from 0, suggesting that the path coefficients for the best-fit model for the data from ISU students who graduated in 2009 did not differ significantly from the path coefficients for the other 3 samples.
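
For ordinary maximum likelihood estimates, the χ2 difference test has a simple form: the difference between the two χ2 values is itself χ2 distributed, with degrees of freedom equal to the difference in degrees of freedom. (The scaled statistics mentioned above require an adjusted difference, so this is a sketch of the unscaled case, with made-up values rather than this study's:)

```python
from scipy.stats import chi2

def chi2_difference_test(chisq_constrained, df_constrained,
                         chisq_free, df_free):
    # The constrained model is nested in the freely estimated model;
    # a nonsignificant P value means the equality constraints are tenable.
    delta_chisq = chisq_constrained - chisq_free
    delta_df = df_constrained - df_free
    return chi2.sf(delta_chisq, delta_df)

# Hypothetical values: a small chi-square increase spread over many
# constrained paths yields a nonsignificant difference.
p = chi2_difference_test(chisq_constrained=30.0, df_constrained=27,
                         chisq_free=16.0, df_free=12)
```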

For all 4 samples, path coefficients were moderate, with the exception of the coefficient for the path from GRE score to QE score for UMN students who graduated in 2010 and the coefficient for the path from UGPA to QE score for ISU students who graduated in 2010. However, these 2 path coefficients did not differ significantly from path coefficients for the other 3 samples. Across the 4 samples, about 7% (UMN students who graduated in 2009) to 28% (ISU students who graduated in 2009) of the variation in QE score was explained by variations in GRE score and UGPA, about 5% (UMN students who graduated in 2009) to 22% (ISU students who graduated in 2009) of the variation in VGPA was explained by variations in UGPA, and about 58% (ISU students who graduated in 2009) to 66% (UMN students who graduated in 2010) of the variation in NAVLE score was explained by variations in GRE score, UGPA, QE score, and VGPA combined.

Discussion

Results of the present study indicated that for veterinary school students, GRE scores, UGPA, VGPA, and QE scores could be used to predict NAVLE scores. This suggests that these measures could prove useful to veterinary schools when admitting students or preparing them for the NAVLE.

For the present study, students were encouraged to not study for the QE and were told that results for the QE would not contribute to their grades. We hoped this approach would reduce the effect of intensive pre-examination study on QE scores but knew that we ran the risk that students would put minimal effort into the examination. According to results of the survey students completed at the time of the QE, 69.1% indicated that they took the QE at least as seriously as a course quiz or low-point course assignment, and only 3.9% reported that they did not take the QE seriously at all. In addition, results of the path analysis supported the idea that students did more than guess on the QE. If students had used minimal effort on the QE, guessing or choosing random answers for many items, the outcome would have been only minimally related to their actual knowledge or ability, and if that had been the case, we would have expected that QE scores would not be predictive of other measures of student performance, such as VGPA and NAVLE score.

In the present study, with all variables accounted for, GRE scores significantly and directly predicted QE scores but did not significantly predict VGPA or NAVLE scores directly. However, GRE scores did indirectly predict NAVLE scores by way of QE scores. In other words, students who scored high on both the GRE and QE were likely to score high on the NAVLE, and students who scored low on both the GRE and QE were likely to score low on the NAVLE. Although the path coefficient from GRE score to QE score for UMN students who graduated in 2010 did not differ significantly from the corresponding coefficients for the other 3 samples, the relationship between GRE score and QE score was not significant for that sample. This could have been due to the relatively small group size (n = 70) or to the fact that the magnitude of this relationship, although significant for 3 of the 4 samples, was not particularly large. In any case, the magnitude of the relationship did not differ significantly across samples, suggesting overall that the relationship was modest but meaningful.
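In path analysis, an effect transmitted through an intervening variable (here, GRE → QE → NAVLE) is the product of the coefficients along that path. A minimal sketch with hypothetical standardized coefficients (illustrative assumptions, not the values estimated for any of the 4 samples):

```python
# Hypothetical standardized path coefficients -- illustrative only,
# not the coefficients estimated in the study.
gre_to_qe = 0.30      # direct path: GRE score -> QE score
qe_to_navle = 0.55    # direct path: QE score -> NAVLE score
gre_to_navle = 0.05   # direct path: GRE score -> NAVLE score (nonsignificant)

# Indirect effect of GRE on NAVLE, transmitted through QE scores
indirect = gre_to_qe * qe_to_navle

# Total effect = direct effect + indirect effect
total = gre_to_navle + indirect

print(f"indirect effect: {indirect:.3f}, total effect: {total:.3f}")
```

This decomposition is why a predictor can matter even when its direct path to the outcome is nonsignificant: its influence flows through the mediating variable.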

Grades in an academic program can indicate a student's ability or knowledge but also can relate to attendance, ability to ascertain and adapt to instructors' expectations, mental stamina, discipline, and test-taking ability. For the present study, we found, as we had hypothesized, that UGPA directly predicted VGPA. It also directly predicted QE scores, although for 1 of the 4 samples (ISU students who graduated in 2010), the coefficient was not significant. However, the path coefficients did not differ significantly among the 4 samples, suggesting that overall, the relationship between UGPA and QE scores was modest but consistent. For all 4 samples, UGPA indirectly predicted NAVLE scores through QE scores, VGPA, or both.

Similar to UGPA, VGPA reflects both domain knowledge (ie, knowledge and skills specific to the domain of veterinary medicine, including basic sciences, surgery, and medicine) and other diverse characteristics related to succeeding in an academic environment. As we had hypothesized, VGPA directly predicted NAVLE scores, indicating that for students included in the study, at least some factors that contributed to performance in the academic program also contributed to performance on the licensing examination.

The QE tests knowledge in the specific domains of veterinary anatomy, physiology, pharmacology, microbiology, and pathology. Knowledge in these areas is thought to be foundational to the practice of clinical medicine. Not surprisingly, therefore, QE scores were the strongest overall predictor of NAVLE scores for all 4 samples in the present study.

Overall, the full model in the present study explained between 58% and 66% of the variation in NAVLE scores, with all measures of student performance that we examined contributing directly or indirectly to the model. Thus, 34% to 42% of the variation in NAVLE scores was unexplained by our model, meaning that some students did better on the NAVLE than one might have expected on the basis of their grades (UGPA and VGPA) and scores on other examinations (GRE and QE). Similarly, some students who did well on these measures did not do as well as might have been expected on the NAVLE. Other factors may account for the unexplained variation in student performance on the NAVLE, such as personal issues (eg, stressful life events or burnout occurring at the same time as the NAVLE or QE), variability among test forms for the NAVLE, differences in the time that students spent preparing for the NAVLE, and differences in elective coursework among students. Future studies should be performed to identify more specifically what constructs the QE and VGPA measure and what additional constructs would be useful in explaining performance on the NAVLE.

There were a number of limitations of the present study. First, although a few students indicated that they did not take the QE seriously at all, it was not possible to exclude data for those students from the analysis because the survey was administered anonymously. It is likely that if these data had been excluded, a source of random variation would have been eliminated, thereby strengthening the measured correlation between QE scores and other measures. However, the fact that the association between QE and NAVLE scores was strong and consistent for all 4 samples suggests the QE was a reliable and valid measure in this study.

Second, entry requirements for GRE score and UGPA at ISU and UMN may have caused range restriction, thereby influencing the results of our study. Veterinary students are likely to have higher than average scores on the GRE and a higher than average UGPA, and range restriction for these variables may have contributed to weaker associations between these measures of preveterinary student performance and subsequently measured variables (VGPA, QE scores, and NAVLE scores) in our path analysis model and may help account for the nonsignificant association between UGPA and GRE scores. Nonetheless, results were similar for all 4 samples in the present study. Therefore, we suspect that range restriction did not affect one variable more than another, although we cannot rule this out definitively.

Third, because of the limited sample size, we omitted several variables that are commonly included in predictive studies of academic success in medical education, including gender and race.7,15 Future studies employing larger samples could explore these variables, as well as relationships with scores for subsections of the GRE or QE.

Fourth, although inclusion of 2 years' data from 2 institutions strengthens the possibility that these findings might generalize to other institutions, there are possible systematic differences among schools or across years that were not detected in the present study. Therefore, replication studies are warranted.

Fifth, more research must be done to address the dependent variable of greatest interest to veterinary schools and the veterinary profession: professional success. Each measure examined in the present study represented a variety of types of knowledge, skills, and abilities. For example, all the standardized instruments that we examined measure domain-specific verbal knowledge and intellectual skills as well as the domain-neutral ability to perform well on multiple-choice tests. For these measures to truly be useful for informing curricular change, the relations among these measures and important constructs such as basic science knowledge, clinical ability, and professional success should be established as explicitly as possible. For example, the QE is intended to measure basic science knowledge, whereas the NAVLE is intended to measure clinical readiness. Hence, the fact that QE scores predict NAVLE scores could mean that mastery of basic science knowledge and skills is important to clinical readiness. If that were the case, curricular changes that improve students' knowledge in the basic sciences should improve their QE scores, which should in turn improve their NAVLE scores, resulting, ultimately, in better practitioners. The fact that this proposition is currently unproven illustrates the need for additional studies linking curricular measures with professional success. Ultimately, our goal as educators of future veterinarians should be that when we change our curricula in ways that produce improvement on immediately available measures (such as grades or standardized examination scores), we are also improving the odds that our graduates will succeed as veterinary professionals.

Findings of the present study support the practice of using overall GRE scores and UGPA (either overall or for prerequisite courses) when making admissions decisions for AVMA-accredited veterinary schools, assuming that NAVLE scores are a valid indicator of clinical preparation. Of the measures evaluated in this study, the most promising single measure for predicting NAVLE scores was QE scores, which explained 4 to 5 times as much of the variation in NAVLE scores as did the other predictors that were examined. Furthermore, the explanatory power of QE scores is more likely to hold true across multiple institutions than is the explanatory power of VGPA, which was the second most promising predictor, because of the many factors likely to cause variations in VGPA from one institution to the next, including grading philosophy, the presence or absence of institutionally mandated grade scales, and the greater or lesser weight given to various disciplines within the curriculum. In contrast, the QE is a stable, standardized measure that should provide an equivalent estimate of students' basic science preparation regardless of grading philosophy or other institutional idiosyncrasies.

Our study did not investigate why QE scores were a stronger predictor of NAVLE scores than were GRE scores, UGPA, and VGPA. There are at least 2 possible reasons for this strong association. First, the QE measures knowledge and skills in the basic sciences, and those skills are foundational to success on the NAVLE. Second, the QE is a standardized, multiple-choice test, similar in format to the NAVLE. It is reasonable to expect that both of these factors would strengthen the correlation between QE and NAVLE scores.

ABBREVIATIONS

GRE  Graduate Record Examination
ISU  Iowa State University
NAVLE  North American Veterinary Licensing Examination
PAVE  Program for the Assessment of Veterinary Education Equivalence
QE  Qualifying Examination
UGPA  Undergraduate grade point average
UMN  University of Minnesota
VGPA  Overall grade point average during veterinary school

a. LISREL 8.54, Scientific Software International Inc, Lincolnwood, Ill.

b. SPSS 17.0, SPSS Inc, Chicago, Ill.

References

  • 1. Kuncel NR, Wee S, Serafin L, et al. The validity of the Graduate Record Examination for master's and doctoral programs: a meta-analytic investigation. Educ Psychol Meas 2010; 70: 340–352.

  • 2. Confer AW. Preadmission GRE scores and GPAs as predictors of academic performance in a college of veterinary medicine. J Vet Med Educ 1990; 17: 56–62.

  • 3. Confer AW, Lorenz MD. Pre-professional institutional influence on predictors of first-year academic performance in veterinary college. J Vet Med Educ 1999; 26: 16–20.

  • 4. Collin VT, Violato C, Hecker K. Aptitude, achievement and competence in medicine: a latent variable path model. Adv Health Sci Educ Theory Pract 2009; 14: 355–366.

  • 5. Ferguson E, James D, Madeley L. Factors associated with success in medical school: systematic review of the literature. BMJ 2002; 324: 952–957.

  • 6. White CB, Dey EL, Fantone JC. Analysis of factors that predict clinical performance in medical school. Adv Health Sci Educ Theory Pract 2009; 14: 455–464.

  • 7. Wilkinson D, Zhang J, Byrne GJ, et al. Medical school selection criteria and the prediction of academic performance. Med J Aust 2008; 188: 349–354.

  • 8. Evans P, Wen FK. Does the Medical College Admission Test predict global academic performance in osteopathic medical school? J Am Osteopath Assoc 2007; 107: 157–162.

  • 9. Hamdy H, Prasad K, Anderson MB, et al. BEME systematic review: predictive values of measurements obtained in medical schools and future performance in medical practice. Med Teach 2006; 28: 103–116.

  • 10. Pretz JE, Naples AJ, Sternberg RJ. Recognizing, defining, and representing problems. In: Davidson JE, Sternberg RJ, eds. The psychology of problem solving. Cambridge, England: Cambridge University Press, 2003; 3–30.

  • 11. Bordage G. Elaborated knowledge: a key to successful diagnostic thinking. Acad Med 1994; 69: 883–885.

  • 12. Chang RW, Bordage G, Connell KJ. The importance of early problem representation during case presentations. Acad Med 1998; 73: S109–S111.

  • 13. Mertler C, Vannatta R. Advanced and multivariate statistical methods: practical application and interpretation. 3rd ed. Glendale, Calif: Pyrczak Publishing, 2005.

  • 14. Byrne BM. Structural equation modeling with LISREL, PRELIS, and SIMPLIS: basic concepts, application, and programming. Mahwah, NJ: Erlbaum, 1998.

  • 15. Yates J, James D. Predicting the “strugglers”: a case-control study of students at Nottingham University Medical School. BMJ 2006; 332: 1009–1013.

  • 16. Satorra A, Bentler PM. Scaling corrections for chi-square statistics in covariance structure analysis, in Proceedings. Bus Econ Statist Sect Am Stat Assoc 1988; 308–313.

Contributor Notes

Address correspondence to Dr. Danielson (jadaniel@iastate.edu).