Laparoscopic spay procedures are becoming increasingly common in veterinary practice, and laparoscopic ovariectomy is one of the most commonly performed minimally invasive companion animal surgeries in the United States.1 The popularity of these procedures is partly attributable to an increased availability of laparoscopy equipment as well as the benefits of minimally invasive surgery such as decreased signs of postoperative pain, less surgical tissue trauma, and faster return to normal activities, compared with more invasive open procedures.2–9 Despite these advantages, it is well accepted that the skills required for laparoscopic procedures are not directly transferable from open surgery experience, and specific training is needed to adapt to the required depth perception, fulcrum effect, and limited tactile feedback.10–12 For laparoscopic ovariectomy, it has been estimated that completion of 80 procedures is required for small animal surgeons to overcome the initial learning curve and achieve proficiency.13
To overcome the obstacle of a steep learning curve in human surgery, simulation-based skills training curricula have been incorporated into physician residency training programs since 2008.14–16 Since 2009, the American Board of Surgery has required all residents to pass an examination in fundamentals of laparoscopic surgery, further emphasizing the importance of simulation training and assessment. Although simulation training in veterinary medicine is still in its infancy, increasing attention to simulation training and assessment has been reported in recent years.6,17–21
In a recently published study,6 approximately 14 hours of basic skills training increased veterinary students' confidence level in performing laparoscopic ovariectomy, although in the same study, 6 of 8 evaluated laparoscopic procedures had to be completed by the supervising surgeon owing to concerns related to prolonged anesthesia time. Specific procedure-based training may be required to enhance surgical performance. Although laparoscopic spay procedures have gained popularity, specific simulated training for these procedures is not yet available.
Simulation training involving the use of cadavers, benchtop models, and live animal surgeries has been extensively used in the human and veterinary medical fields to develop and refine surgical skills.14,18,22–31 For minimally invasive procedures, various modalities have been incorporated into training programs. These include low-fidelity benchtop models, such as those used in MISTELS tasks, which have been adopted and validated for training and skills assessment in veterinary medicine.18,20,32 High-fidelity simulations including virtual reality, augmented reality, and cadaveric surgical procedures have also been validated as training tools in both human and veterinary medicine,33–39 yet the expense and currently limited availability of computer-based simulators can prevent their use in veterinary medicine training programs. Furthermore, before a simulator can be integrated into an educational program, validation is imperative.14 The OSATS is a validated assessment method of general surgical performance, composed of 5 to 6 categories assessed by use of a 5-point Likert scale.32,40–42 This assessment tool can be used for any type of surgical performance evaluation, including surgical procedures on human patients, live animal surgeries, and simulated surgery involving the use of models.40,41,43–46 Several studies6,17,32 in veterinary medicine have applied a modified OSATS to evaluate surgical performance on simulators or during actual surgical procedures and have shown the construct validity of these methods. Construct validity refers to whether a test accurately measures or correlates with the theorized construct that it purports to measure (ie, whether the simulation can accurately demonstrate differences among populations with different levels of a specific skill).47 Other means of assessment include concurrent validity (measuring the extent to which results of the test in question match those of an established test for the same construct) and face validity (subjectively determining whether the test appears to evaluate what is intended).14,47
In our opinion, the need for low-cost, high-fidelity, validated simulation models in veterinary medicine is presently not being met. Moreover, none of the current models feature portal placement simulation, despite the fact that iatrogenic trauma during laparoscopic entry is the most commonly cited complication and is the most common reason for conversion to open surgery.4,7,8,13,48
The objectives of the study reported here were to describe the development of a high-fidelity canine laparoscopic ovariectomy model for surgical simulation training and testing and to assess the construct, concurrent, and face validity of the model. We hypothesized that the model would have construct validity when used by individuals in veterinary training or practice who have various degrees of laparoscopic surgical experience. We also hypothesized that concurrent validity would be confirmed when performance results were compared with modified MISTELS scores (ie, basic laparoscopic skills test scores) and OSATS for surgical performance.
Materials and Methods
Study participants
The study protocol was approved by the Washington State University Institutional Review Board. A convenience sample of veterinary students from Washington State University College of Veterinary Medicine in the preclinical years (years 1 and 2) of their education, with little or no exposure to minimally invasive surgery or simulation training (novice group), was recruited to participate in the study on a voluntary basis. In addition, ACVS board-certified small animal or equine surgery specialists with extensive experience in minimally invasive surgery (experienced group) and veterinarians with experience in minimally invasive surgery as a primary surgeon or assistant but who had participated in ≤ 10 procedures/y or had completed simulation training through a small animal or equine ACVS residency program (intermediate group) were recruited from the same institution and from a veterinary referral hospital (WestVet Veterinary Specialty Clinic, Boise, Idaho). All subjects enrolled in the study provided informed consent prior to their participation and completed a questionnaire regarding their minimally invasive surgery experiences.
The experience groups of nonstudent participants were determined by self-estimation of the number of laparoscopic procedures performed and the number of years in which participants had been performing minimally invasive surgery as well as by use of a VAS for self-estimated experience level, in which 0 cm indicated the participant was a novice, 5 cm indicated having performed ≥ 10 procedures as a primary attending clinician, and 10 cm indicated that the participant was a board-certified specialist with weekly experience in performing a variety of minimally invasive surgical procedures within the past 3 years.
Basic laparoscopic skills assessments
Basic laparoscopic skills were assessed on the basis of MISTELS28,30,49 tasks. Participants were required to read written instructions and watch video recordings demonstrating each of 3 tasks, which included peg transfer (movement of triangular plastic pegs from the nondominant-hand side of a pegboard to the other and reversal of the process), pattern cutting (of a 4-cm-diameter circle that was marked on a 10 × 15-cm piece of instrument wrapping material suspended between alligator clips), and ligature loop placement (placing a ligature of pretied suture with a 4S modified Roeder knot to produce a 10-cm-diameter loop over a mark indicated on a foam appendix) using laparoscopic tools inside a training box.a A few minutes of instrument handling outside the training box were allowed prior to assessment so that participants could gain some familiarity with the instruments. If a participant had questions about the tasks or instrumentation, the examiner (C-YC) answered these questions immediately prior to the skills assessment. However, warmup exercises were not allowed immediately prior to testing, and no further coaching was provided once the assessment had started.
Assessments were performed with the benchtop training modela connected to a television screen. The scores were assigned as follows. For the pegboard transfer, the task was scored on the basis of time (seconds) for completion, with a 300-second cutoff time and a penalty for pegs dropped outside the view of the camera. The score for this task was calculated as follows: score = (300 – time for completion of task) – ([50 × number of dropped pegs on first transfer] + [25 × number of dropped pegs on second transfer]). When time exceeded 300 seconds, the exercise was stopped and a score of 0 assigned. Pegs dropped inside the visual field were not penalized other than the added time required to pick up the peg with the instrument used when dropping it and to complete the exercise. If a peg was dropped outside the field, it was returned by the observer to the original peg pin, and the participant was required to repeat the exercise for this peg. The pattern-cutting task was scored on the basis of time (seconds) for completion, with a 300-second cutoff time and a penalty determined according to the percentage area that the cut pattern deviated from the marked circle (ie, score = [300 – time for completion of task] – [percentage deviation from marked circle]). If time exceeded 300 seconds, the exercise was stopped and a score of 0 was assigned. For ligature loop placement, the task was also scored on the basis of completion time (seconds), with a 180-second cutoff time and a penalty based on the distance in millimeters by which the loop was away from the mark (ie, score = [180 – time to completion] – [distance from mark]). If time exceeded 180 seconds, the exercise was stopped and a score of 0 was assigned.18 All of the tasks were evaluated by the same examiner (C-YC). After calculation of the score for each individual task, the scores were normalized to previously reported expert level scores29 (ie, percentage of expert-level scores converted to an absolute value). A score of 100 for each task was considered expert level. A total basic laparoscopic skills score was calculated by adding the 3 task scores (range of possible scores, 0 to 300). All participants were blinded to the final calculated scores.
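The scoring scheme above can be summarized in a short computational sketch. This is an illustration only, not the scoring program used in the study; the cutoff times and penalty weights restate the text, whereas the example inputs and the expert reference scores are hypothetical placeholders.

```python
# Minimal sketch of the modified MISTELS scoring described above.
# Cutoffs and penalty weights restate the text; expert reference scores and
# example inputs are hypothetical placeholders.

def peg_transfer_score(time_s, out_of_view_drops_first, out_of_view_drops_second):
    """Peg transfer: 300-s cutoff; penalties only for pegs dropped out of camera view."""
    if time_s > 300:
        return 0
    return (300 - time_s) - (50 * out_of_view_drops_first + 25 * out_of_view_drops_second)

def pattern_cut_score(time_s, pct_deviation):
    """Pattern cutting: 300-s cutoff; penalty is the % deviation from the marked circle."""
    if time_s > 300:
        return 0
    return (300 - time_s) - pct_deviation

def ligature_loop_score(time_s, distance_from_mark_mm):
    """Ligature loop: 180-s cutoff; penalty is the distance (mm) of the loop from the mark."""
    if time_s > 180:
        return 0
    return (180 - time_s) - distance_from_mark_mm

def normalized(raw_score, expert_raw_score):
    """Express a raw task score as an absolute percentage of an expert-level score."""
    return abs(100 * raw_score / expert_raw_score)

# Hypothetical expert reference scores and one participant's raw results
EXPERT = {"peg": 250.0, "cut": 240.0, "loop": 150.0}
raw = {
    "peg": peg_transfer_score(180, 1, 0),
    "cut": pattern_cut_score(210, 8.0),
    "loop": ligature_loop_score(120, 4.0),
}
task_scores = {task: normalized(raw[task], EXPERT[task]) for task in raw}
total_score = sum(task_scores.values())   # range of possible totals described in the text: 0 to 300
print(task_scores, round(total_score, 1))
```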
Model creation
A commercially available simulated canine abdomen modelb with a custom-made simulated peritoneum at the abdominal entry site was used. Custom-made simulated ovarian and uterine tissue was added to other simulated organs in the abdomen model. A simulated small intestine made of silicone rubberc was created to replace the existing small intestine model in the commercial device to improve the handling characteristics.
The laparoscopic ovariectomy model simulated the genital tract anatomy of a medium-sized bitch. A mold was made from plaster of Paris by use of a clay model originally made by one of the authors (ME). Liquid silicone was poured into the mold to create the simulated ovaries and uterus. Silicone-soluble dye was added (0.1 mL for red and 0.05 mL for lighter organ [pink] color) into 75 mL of liquid silicone for uterine and ovarian tissue, with 175 mL of white-colored silicone used to simulate ligaments and fat. The dye-and-silicone mixture was allowed to rest in the mold for ≥ 6 hours to form the simulated genital tract. Once the model was removed from the mold, 3 dissection marks were drawn on its suspensory ligament, proper ligament, and ovarian pedicle, each 2 cm away from the ovarian tissue (Figure 1). The custom model was then placed in the described canine abdomen modelb with the simulated suspensory ligament attached to the left side of the abdominal wall by a self-adhesive hook-and-loop fastenerd (Figure 2). The ovarian pedicle was secured with an alligator clip extended from the cranial midline of the abdomen model.
Instrumentation and laparoscopic entry simulation
All instruments used in the model were standard laparoscopic instruments. These included threaded, reusable cannulase for portal establishment; a 10-mm, 30° rigid endoscopef; a 5-mm blunt probe and 5-mm curved dissecting and grasping forcepsg; and a 10-mm vessel-sealing device.h
A 3-layer latex sheeti adhered to a 2 × 2-inch neoprene pad was used to model a portion of abdominal wall with a peritoneal lining, and this model was sutured onto one of the portal sites of the abdomen model (Figure 2), where a reusable threaded cannulae would be used for portal establishment and abdominal model entry. A 2 × 2-inch pressure sensing paperj was stapled on top of the tail of the spleen located immediately ventral to the portal site to detect any simulated injury during the abdominal model entry. The spleen was placed approximately 2 cm away from the abdominal model wall. A high-definition webcamk was installed in the cranial part of the abdomen model, and a real-time video recording was made during entry through the portal.
SLO procedure
The SLO consisted of laparoscopic entry and left ovariectomy simulation. Before performing the task, all participants were asked to review an instructional video demonstrating the procedure and step-by-step written instructions for the SLO procedure. If a participant had questions after this review, the proctor (C-YC) answered questions about the tasks or instrumentation immediately prior to assessment. Warm-up exercises were not allowed immediately prior to testing.
The canine abdomen model was placed in an approximately 12° Trendelenburg position. The model had 6 premade holes mimicking the portal sites, and 4 of these (1 for the endoscope, 1 for the threaded cannula, and 2 used as instrument portals) were used in the SLO procedure (Figure 3). For initial abdominal entry, a threaded, reusable cannulae was placed at one of the portal sites on the simulated left abdominal wall, and gentle pressure was applied to rotate the cannula in a clockwise direction through the mock peritoneum. The participant was allowed to use a blunt instrument (probe) to verify entry into the model abdominal cavity if desired. After the expected entry, the cannula was removed and a 30°, 10-mm endoscopef was inserted in the middle portal of a row of 3 placed along the midline, 2 cm distal to the mock umbilicus. A 3-portal technique was chosen in which portal sites cranial and caudal to the endoscope were used for instrument placement.
Participants used the laparoscope to locate the site of the left ovary, which was partially obscured by the spleen or small intestines in the model. A blunt probe was used to retract the organs and expose the ovary with clear visualization of the dissection marks. After identifying the dissection marks, the laparoscope was stabilized on a stand with the camera focused on the ovary and the surgery site. Grasping forcepsg and a 10-mm vessel-sealing deviceh were used for dissection of the ovarian tissue along the 3 dissection marks, starting at the suspensory ligament, continuing at the ovarian pedicle, and finally transecting the ovarian bursa from the proper ligament. After dissection of the left ovarian tissue, the ovary was retrieved with the grasping forcepsg through one of the portals. If the tissue was dropped during retrieval, the participant had to relocate the tissue and exteriorize the ovary. Evaluation time started when the cannula contacted the model abdominal wall and stopped when the ovary was exteriorized or if the 10-minute procedure cutoff time, preselected on the basis of previously reported surgery times for laparoscopic ovariectomy in dogs,3,4,8 was reached.
Surgical performance evaluation during SLO
The SLO performance was evaluated by 2 approaches. The first was measurement of total procedure time and documentation of errors (deviation in millimeters from the marked lines along the ovarian tissue and whether splenic puncture occurred [recorded as yes or no] in the model). To identify splenic puncture, the pressure-sensor paper was examined in combination with the real-time video recording after completion of the procedure.
The second approach was application of an OSATS, which included a GRS used with a 10-cm VAS as previously described32 with minor modifications. The OSATS also included task-specific criteria assessment in an OCRS used with a 10-cm VAS as previously described,32 but with novel rating criteria designed for purposes of the present study.l
Briefly, the GRS comprised 6 domains (triangulation skills, depth perception, bimanual dexterity, efficiency, tissue handling, and instrument use) that were rated by use of anchor descriptors from 1 (reflecting the lowest level of ability) to 10 (reflecting the highest level of ability).32 The total score for the GRS component of the OSATS ranged from 0 to 60. The OCRS component, rated in the same manner, consisted of 4 domains: cannula placement, structure identification (kidney, ovary, and related vessels), splenic retraction, and ovariectomy. The structure identification skill was divided into 2 components: anatomy identification and camera-operating skills. The ovariectomy domain was divided into 4 components (including effective use of grasping forceps, systematic and effective dissection, appropriate placement of the cutting and sealing device, and successful retrieval of the ovary without dropping) that were rated in a similar manner. Thus, the total score for OCRS ranged from 0 to 80.
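To make the aggregation of these rating scales concrete, the sketch below sums per-domain ratings into GRS and OCRS totals. The domain and component names follow the text; the ratings shown are invented for illustration.

```python
# Sketch of OSATS score aggregation as described above; ratings are hypothetical.

GRS_DOMAINS = [
    "triangulation", "depth_perception", "bimanual_dexterity",
    "efficiency", "tissue_handling", "instrument_use",
]  # 6 domains x 0-10 each -> total 0-60

OCRS_COMPONENTS = [
    "cannula_placement",
    "anatomy_identification", "camera_operation",        # structure identification
    "splenic_retraction",
    "grasping_forceps_use", "systematic_dissection",     # ovariectomy components
    "sealing_device_placement", "ovary_retrieval",
]  # 8 components x 0-10 each -> total 0-80

def osats_total(ratings, expected_items):
    """Sum VAS ratings (0-10 cm each) after checking that the expected items are present."""
    assert set(ratings) == set(expected_items), "missing or unexpected rating items"
    assert all(0 <= value <= 10 for value in ratings.values()), "ratings must be 0-10"
    return sum(ratings.values())

# Hypothetical ratings for a single participant
grs_ratings = {domain: 7.5 for domain in GRS_DOMAINS}
ocrs_ratings = {component: 6.0 for component in OCRS_COMPONENTS}
print(osats_total(grs_ratings, GRS_DOMAINS), osats_total(ocrs_ratings, OCRS_COMPONENTS))
```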
The 2 evaluators were an ACVS diplomate with extensive experience in laparoscopic procedures (BAF), who evaluated the novice group, and a trained proctor (C-YC), who evaluated all participants. The OSATS results from the 2 evaluators were analyzed individually for each participant, and the evaluators were blinded to each other's scoring of subject performance. For practical reasons, the evaluators could not be blinded to the experience level of participants. For all data analysis except interrater agreement, the scores assigned by the trained proctor (C-YC) were used.
The face validity questionnairel was also completed by use of a 10-cm VAS for which 0 cm represented the lowest value and 10 cm represented the highest value. This questionnaire included 6 questions for rating the overall model experience (including the visual realism of various model organs, interaction between instruments and model objects, difficulty of the simulated procedure as compared with actual surgery, similarity of haptic feedback for model versus real tissues, and usefulness of the model for teaching). It also included 4 questions for rating the surgical procedure (the degree to which the participant felt abdominal entry, retraction of organs, and dissection were lifelike and similarity of cutting action for the vessel cutting-and-sealing device used in the model, compared with that during surgery). This questionnaire was completed by all participants with laparoscopic experience (the experienced and intermediate groups combined) immediately after the SLO.
Correlations between measures of experience level (total procedures performed [calculated as the self-estimated number of laparoscopic procedures performed per year multiplied by the number of years' experience] and self-estimated experience level [as determined with the VAS]) and the results of SLO performance (ie, laparoscopic entry time, splenic puncture error, ovariectomy time, and dissection error) were used to assess construct validity. For this evaluation, SLO completion time was considered the primary variable of interest. Concurrent validity was assessed by correlating the results of SLO performance with basic laparoscopic skills and OSATS (GRS and OCRS) scores.
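A brief sketch of how these correlations can be computed follows. The arrays are hypothetical stand-ins for the study data (they are not participants' actual values), and Spearman rank correlation is shown because it was used for the nonnormally distributed variables.

```python
# Sketch of the construct- and concurrent-validity correlations described above.
# All values are hypothetical placeholders, not study data.
from scipy.stats import spearmanr

years_experience    = [0, 0, 1, 2, 4, 6, 13, 22]                 # self-reported
procedures_per_year = [0, 0, 4, 8, 10, 5, 18, 30]
vas_experience      = [0.5, 1.0, 3.0, 4.0, 5.5, 7.0, 9.0, 9.8]   # 0-10 cm VAS
slo_time_s          = [600, 600, 560, 540, 480, 400, 300, 250]   # SLO completion time
mistels_total       = [60, 95, 140, 180, 200, 215, 230, 245]     # basic skills scores

# Total procedures = procedures/y x years (construct-validity experience measure)
total_procedures = [p * y for p, y in zip(procedures_per_year, years_experience)]

# Construct validity: experience measures vs SLO completion time
print(spearmanr(total_procedures, slo_time_s))
print(spearmanr(vas_experience, slo_time_s))

# Concurrent validity: basic laparoscopic skills (MISTELS) score vs SLO completion time
print(spearmanr(mistels_total, slo_time_s))
```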
Statistical analysis
Normally and nonnormally distributed data are reported as mean ± SD or median and interquartile range, respectively. For continuous variables, data distribution was assessed with the D'Agostino-Pearson omnibus K2 test. Correlations between variables were determined by use of the Pearson method (and Pearson t test) for normally distributed data and the Spearman rank test for nonnormally distributed data. One-way ANOVA followed by a Tukey multiple comparison test was performed to compare SLO completion times and basic laparoscopic skills scores among participants grouped by experience (ie, novice, intermediate, and experienced groups). Interrater agreement for OSATS scores was evaluated by correlation analysis, including determination of the Cronbach α. Cronbach α values > 0.9 were considered to indicate excellent agreement between raters. Binomial data were analyzed with a Fisher exact test. All statistical calculations were performed with statistical software.m,n Values of P ≤ 0.05 were considered significant.
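This statistical workflow can be sketched as follows with NumPy and SciPy. The data are hypothetical placeholders, Cronbach's α is computed manually because SciPy does not provide it, and scipy.stats.tukey_hsd assumes a reasonably recent SciPy release.

```python
# Sketch of the statistical analyses described above, using hypothetical data.
import numpy as np
from scipy import stats

novice       = np.array([578, 590, 565])            # hypothetical SLO times (s)
intermediate = np.array([545, 520, 560, 530, 550])
experienced  = np.array([290, 310, 270, 300, 285])

# Normality (D'Agostino-Pearson omnibus K2 test); SciPy warns that small n is suboptimal
print(stats.normaltest(np.concatenate([novice, intermediate, experienced])))

# One-way ANOVA followed by Tukey multiple comparisons (recent SciPy assumed for tukey_hsd)
print(stats.f_oneway(novice, intermediate, experienced))
print(stats.tukey_hsd(novice, intermediate, experienced))

# Pearson vs Spearman correlation, chosen according to data distribution
x = np.array([9.3, 3.8, 1.0, 8.9, 2.5])    # hypothetical VAS experience scores
y = np.array([290, 545, 600, 300, 560])    # hypothetical SLO times (s)
print(stats.pearsonr(x, y))
print(stats.spearmanr(x, y))

# Cronbach's alpha for interrater agreement (2 raters treated as "items")
def cronbach_alpha(scores):
    """scores: participants x raters array of OSATS totals."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rater_scores = np.array([[12, 14], [40, 38], [52, 50], [21, 19]])  # hypothetical
print(cronbach_alpha(rater_scores))

# Fisher exact test on a 2 x 2 table (eg, splenic puncture: novice vs non-novice)
table = np.array([[5, 10],   # novice: puncture / no puncture (hypothetical layout)
                  [1, 10]])  # non-novice
print(stats.fisher_exact(table))
```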
Results
Twenty-six participants were enrolled in the study. The novice group comprised 15 veterinary students. The experienced group consisted of 6 ACVS diplomates (4 small animal surgeons and 2 large animal surgeons). The intermediate group included 4 residents in veterinary surgery (3 in small animal surgery and 1 in large animal surgery) and 1 surgery intern.
None of the 15 participants in the novice group had any hands-on experience in laparoscopic surgery or simulation training. The median number of years' experience with laparoscopic surgery for intermediate group participants was 2 (range, 1 to 4 years), with a median of 7.5 procedures/y (range, 3.75 to 10 procedures/y). Experienced group participants had a median of 13 years of experience in laparoscopic and thoracoscopic surgery (range, 6 to 22 years) and performed a median of 18 procedures/y (range, 4.5 to 33.3 procedures/y). The self-estimated experience level was significantly (P = 0.008) higher in the experienced group than in the intermediate group (mean ± SD VAS scores, 9.3 ± 0.7 and 3.82 ± 2.7, respectively).
Basic laparoscopic skills assessment
All 26 participants completed the basic laparoscopic skills tasks. Results for all groups are depicted (Figure 4). Differences were detected among groups (P < 0.001), with the novice group scoring lower (mean ± SD, 94.5 ± 45; range, 26 to 166) than the intermediate (mean ± SD, 221 ± 58; range, 134 to 274) and experienced (mean ± SD, 210 ± 43; range, 158 to 248) groups. No significant (P = 0.93) difference was detected between the intermediate and experienced groups. On Spearman analysis, self-reported laparoscopic experience level (by use of the VAS) and basic laparoscopic skills score were significantly (P = 0.016) correlated (rS = 0.650). When the total self-estimated number of procedures performed was used, a slightly lower but significant (P = 0.03) correlation (rS = 0.422) was found.
SLO construct validity
One participant in the novice group was not successful in the attempted initial abdominal entry (ie, placement of the threaded cannula) for the model. The time required for abdominal entry for the remaining 25 participants was not correlated with experience on Pearson analysis (r = 0.09; P = 0.76) or with self-estimated laparoscopic experience determined by VAS scoring (r = −0.14; P = 0.63). Six participants (5 in the novice group and 1 in the intermediate group) punctured the spleen on abdominal entry. However, no significant (P = 0.484) difference was found for this variable among groups. Three of 15 participants in the novice group completed the SLO within 10 minutes, whereas 5 of 5 and 5 of 6 participants in the intermediate and experienced groups, respectively, completed the procedure in this time, with a significant (P = 0.001) difference among groups. The rate of completion within the preset time was higher for the intermediate and experienced groups than for the novice group (P = 0.004 and P = 0.014, respectively) but did not differ between the intermediate and experienced groups (P = 1.0). The SLO completion time varied significantly (P < 0.001) among groups. Completion times for the novice (median, 578 seconds; n = 3) and intermediate (median, 545 seconds; 5) groups were significantly (P = 0.013 and 0.015, respectively) longer than for the experienced group (median, 290 seconds; 5), but there was no difference (P = 0.857) in completion times between the novice and intermediate groups (Figure 5). The SLO procedure time was negatively correlated with the total self-estimated number of laparoscopic procedures performed (rS = −0.609; P = 0.027) and with self-estimated laparoscopic experience level as determined by VAS scoring (rS = −0.626; P = 0.022).
Dissection error measurement (deviation from the mark) was 9.6 ± 5.36 mm, 5.2 ± 1.52 mm, and 4.2 ± 1.15 mm for the novice, intermediate, and experienced groups, respectively. This variable was not correlated with self-estimated laparoscopic experience by VAS scoring (rS = −0.21; P = 0.48).
SLO concurrent validity
Basic laparoscopic skills score was significantly (P = 0.05) and negatively correlated (r = −0.552) with SLO completion time. In surgical performance assessments, mean ± SD GRS scores were 11.63 ± 5.53, 38.9 ± 3.39, and 50.6 ± 1.56 for the novice, intermediate, and experienced groups, respectively. The mean ± SD OCRS scores were 20.5 ± 2.03, 53.3 ± 2.74, and 66.9 ± 3.67 for the novice, intermediate, and experienced groups, respectively. Correlation analysis of ratings assigned by the 2 investigators revealed significant correlations for both GRS scores (rS = 0.964; P = 0.003) and OCRS scores (rS = 0.821; P = 0.034). The Cronbach α values for GRS and OCRS interrater agreement were 0.957 and 0.966, respectively, indicating excellent agreement. The GRS and OCRS scores were also significantly correlated with basic laparoscopic skills scores (GRS, r = 0.735 [P = 0.004]; OCRS, r = 0.70 [P = 0.008]) and SLO time (GRS, r = −0.624 [P = 0.023]; OCRS, r = −0.624 [P = 0.022]). Neither dissection error measurement nor the occurrence of splenic puncture was correlated with GRS or OCRS scores (P = 0.16 and P = 0.22, respectively).
SLO face validity
The face validity questionnaire was completed by all 11 participants in the intermediate and experienced groups. For overall experience, usefulness of the model received the highest mean score on the 10-cm VAS (8/10), followed by realism of interactions between instruments and model objects (7.2); ovary realism (6.8); abdominal organ realism (6.5); difficulty of the procedure, compared with actual surgery (6.3); and haptic feedback (6.2). For the surgical procedure, the highest score was assigned for the degree to which instrument cutting action was lifelike or similar to actual surgery (6.8/10), followed by the same measures for tissue dissection (6), abdominal entry (5.6), and retraction of organs (4.8). The overall measure of face validity for the model (sum of all scores) was 64.2/100.
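As a quick arithmetic check of the summary value above, the item means listed in this paragraph can be totaled directly; the sketch below simply restates those published means and sums them.

```python
# Restating the face validity item means reported above and summing them.
overall_experience = {           # 6 items rated on a 10-cm VAS
    "usefulness_for_teaching": 8.0,
    "instrument_object_interaction": 7.2,
    "ovary_realism": 6.8,
    "abdominal_organ_realism": 6.5,
    "difficulty_vs_actual_surgery": 6.3,
    "haptic_feedback": 6.2,
}
surgical_procedure = {           # 4 items rated on a 10-cm VAS
    "cutting_action": 6.8,
    "tissue_dissection": 6.0,
    "abdominal_entry": 5.6,
    "organ_retraction": 4.8,
}
total = sum(overall_experience.values()) + sum(surgical_procedure.values())
print(round(total, 1))                                    # 64.2 of a possible 100
print(round(sum(overall_experience.values()) / 6, 2))     # mean of overall-experience items
print(round(sum(surgical_procedure.values()) / 4, 2))     # mean of procedure items
```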
Discussion
The significant correlation between surgical completion time for the SLO and self-estimated laparoscopy experience level indicated construct validity of the SLO model investigated in this study. A similar correlation between basic laparoscopic skills scores and SLO completion time indicated concurrent validity of the model. Furthermore, SLO completion time was significantly correlated with results for the OSATS, suggesting that the model may be useful as an evaluation tool for laparoscopic surgical performance.
Basic laparoscopic skills tasks are considered the gold standard for laparoscopic training and assessment, and this training has been repeatedly shown to improve laparoscopic skills in human and veterinary medicine.18,20,27,29,30,50,51 When assessing all groups (novice, intermediate, and experienced), a significant positive correlation was found between experience level and basic laparoscopic skills score, reflecting the previously determined construct validity of basic laparoscopic skills tasks. However, we found no difference in these scores between the intermediate and experienced groups. We suspected that this discrepancy may have been attributable to the fact that surgical residents at our institution are trained in MISTELS tasks as part of their curriculum and are therefore familiar with those tasks. All residents (n = 4) in the intermediate group had recently completed a simulation training curriculum at the time of this project, which likely resulted in a high level of basic laparoscopic skills. On the other hand, SLO completion time was significantly and negatively correlated with 2 self-reported measures of experience (total self-estimated number of procedures performed and laparoscopic experience level by VAS), suggesting that the model may be useful for discriminating performance levels even in a population of veterinarians with various degrees of experience after laparoscopic skills training. This finding also supported the generally accepted notion that surgical performance only partly comprises psychomotor laparoscopic skills; surgical decision-making skills are not specifically addressed by MISTELS-type training.
We also observed that self-estimated laparoscopic experience by use of the VAS was more strongly correlated with basic laparoscopic skills score (rS = 0.650) than was the self-estimated total number of laparoscopic procedures performed (rS = 0.422). Some reports have indicated that self-reported experience is more subjective and less reliable, compared with hospital records,32,35 and surgeons have been shown to consistently overestimate their own performance, compared with trained raters using the same global assessment evaluation system.52 Our findings suggested that the VAS might be a better method than estimated case numbers for determining laparoscopic experience level if actual case numbers cannot be determined.
High-fidelity models that mimic actual surgical procedures have been extensively studied and developed in human medicine for laparoscopic simulation training and assessment. Some of these are combined with virtual reality, augmented reality, and cadaveric or live animal surgeries for specific procedures (eg, cholecystectomy in healthy pigs).22,25,26,52–55 One of the advantages of high-fidelity simulation models is the opportunity for participants to practice surgery in a low-pressure setting. The concepts of laparoscopic skills assessment and training are receiving more attention in veterinary medicine, and several recent studies17,21,27,35 have reported the use of different simulators for training and assessment. However, to our knowledge, this was the first high-fidelity simulation model reported for small-animal laparoscopic procedures. The first simulated canine ovariohysterectomy model was developed by Griffon et al56 in 2000 for open procedures, and the concepts were used to simulate anatomy or hemodynamics for teaching soft tissue surgery. In our model, we believe the simulated ovarian and uterine tissue achieved the purpose of simulating the anatomy, yet a limitation of our model was the lack of hemodynamic simulation (ie, bleeding during dissection). We considered that this limitation might have been minor, as the use of a vessel-sealing device provides good hemostasis and decreases the risk for hemorrhage during laparoscopic ovariectomy, compared with results for an open procedure.6,57
Studies14,47,58 have shown it is inadvisable to use a training model before its validity as an educational tool is proven. Different validation methods have been discussed in the literature, including subjective or objective methods. For subjective (or face) validity, surgeons with expertise in a task or set of tasks are asked to assess similarity of the model to the actual surgical situation. However, objective approaches such as construct, concurrent, and predictive validation are also required for evaluation of simulators.14,47 Objective validity methods determine whether a simulator can discriminate between different levels of expertise (construct validity), whether new simulator scores compare with those of the gold standard regarding the participants' performance (concurrent validity), or whether the effects of simulator training transfer into procedures performed on patients in the operating room, on a cadaver, or with a substitute surgical model (predictive validity).14 When validating a new simulation model, precision (error measurement) and speed are the most commonly used evaluation criteria. Our results revealed a significant correlation between SLO completion time and self-estimated experience level (as measured with VAS), yet none of the error measurements were correlated with self-estimated experience. The main purpose for setting the dissection marks as an error measurement was to identify whether precision of dissection correlated with experience, and we expected that more-experienced surgeons would have smaller error measurements. One possible explanation for the inability to identify such an association may have been that only 3 participants in the novice group completed the procedure and that the small sample size influenced these results. Despite this lack of correlation, the OCRS scores indicated that the experienced group had less erratic and more precise instrument movement and were more efficient in dissecting the tissue, as reflected by the higher scores, compared with the novice group. We considered that, in laparoscopic ovariectomy of a live dog, the area for dissection would be broader than that provided in the model. Moreover, ovarian tissue is often embedded in adipose tissue, further obscuring the tissue to be dissected. Therefore, the 2-cm mark used to measure dissection error might not have been a good criterion for evaluating this skill. For training purposes, the dissection lines may remain useful to allow practice in precision cutting. To our knowledge, however, the most commonly used benchtop laparoscopic box trainers share the limitation that only time and error measurements such as this can be used to evaluate performance. Conversely, virtual reality and augmented reality motion metrics may provide more detailed assessment of surgical performance by analyzing instrument movements. Therefore, further investigations to combine this model with a virtual reality system and advanced motion metrics may be warranted.
Currently, 2 main methods are used to gain laparoscopic portal entry to the peritoneal space: an open technique that requires direct surgical incision and dissection of the abdominal wall (Hasson technique) and the Veress needle technique, which involves blind insertion of the Veress needle into the abdominal cavity for intra-abdominal carbon dioxide insufflation, followed by insertion of the first trocar in a similarly blind manner. In human medicine, the incidence of iatrogenic vascular or bowel injury from laparoscopic entry ranges from 0.39/1,000 (0.039%) procedures to 4 of 500 (0.8%) procedures.59,60 To the best of our knowledge, no large-scale analysis of such complications has been performed in veterinary medicine, but splenic injury has been reported to account for 1 of 30 (3%) to 3 of 16 (19%)61,62 complications during laparoscopic ovariectomy or ovariohysterectomy. In our study, the incidence of splenic puncture in the model did not differ significantly among groups. However, when all participants were included, the overall rate of splenic puncture was 5 of 15 for the novice group, compared with 1 of 5 in the intermediate group and 0 of 6 in the experienced group. One possible explanation for the lack of significant differences was that not all the surgeons in the experienced group were familiar with the chosen entry technique. On inquiry after the study, some of the surgeons indicated that they were more familiar with the Veress needle technique. To our knowledge, no simulation training has previously been available for portal placement and creation of pneumoperitoneum, and our model appeared to be the first to simulate portal establishment. A reusable threaded cannula was chosen for this procedure because it was the preferred device of surgeons at the authors' institution. Whether one entry technique offers benefits over another remains controversial.60,63–67 Another possible explanation for the lack of difference between groups is that we positioned the spleen in the model approximately 2 to 3 cm away from the abdominal wall, which is generally greater than the distance observed in live anesthetized animals. If the spleen had been in immediate contact with the body wall, as may often be the case in dogs undergoing laparoscopic ovariohysterectomy, the puncture rate among novices may have been higher.
In our study, a 10-minute cutoff time was chosen on the basis of several clinical studies,2,3,8,61,68 in which the typical complete surgical time ranged from 29 minutes to 44 minutes.3,4,61 Dupré et al3 reported a unilateral ovariectomy time (from portal establishment to dissection) of 5 minutes and 3 seconds with a 1-portal technique and 4 minutes and 21 seconds with a 2-portal technique. A 10-minute completion time therefore seemed to be clinically appropriate for laparoscopic entry and unilateral ovariectomy.
The OSATS results for GRS and OCRS scores independently assigned by the 2 observers were significantly correlated, which indicated the interrater variance was small, as previously reported.32 Interrater agreement for both variables in the present study was excellent. It is worth mentioning that one of the raters did not have extensive experience with minimally invasive surgery, yet the correlation between raters was high, supporting the use of trained raters in the future. This would be advantageous, as experienced laparoscopic surgeons might have limited time available for assessments. In the present study, the OCRS comprised a modified checklist that combined a VAS scoring system with step-by-step procedure evaluations instead of soliciting yes or no responses. This method has previously been validated by our research group32 and has shown a correlation between scores and experience levels, suggesting that this combination is a feasible way to evaluate procedural performance.
Although the postprocedural questionnaire for assessment of face validity reflected lower scores than expected, particularly for features such as procedure difficulty, haptic feedback, and factors associated with realism of the surgery, most of the participants with laparoscopic experience (intermediate and experienced groups) indicated that the model would be useful for training (mean score, 8/10). Perceptions of the model as being less challenging than actual surgery seemed counterintuitive, considering that most of the novices and one of the board-certified surgeons did not complete the SLO within the 10-minute time limit. Nonetheless, the general evaluation for face validity of the model rendered an average evaluation rate of 68%. To the authors' understanding, there are currently no similar face validity rates available for comparison with the results of our model. The rating scale used in this study may be useful for comparisons in future developments of other high-fidelity surgical procedure models.
There were several limitations to the model used in the present study. First, although the Trendelenburg position was used in the model, the inability to tilt the model to the left or right to better mimic actual surgical conditions might have prolonged the surgical time. Although we used the Trendelenburg technique to facilitate organ retraction, we believe organ retraction was more difficult than in a live dog, owing to the material features of the simulated organs and their inanimate nature. Another limitation was that the laparoscopic entry technique we chose on the basis of the preferred entry method at our institution may not have been representative of the most common entry technique used by veterinary surgeons. This likely affected the validity of performance assessment, and some of the surgeons we tested mentioned that they used other entry techniques in their clinical practice. However, the simulated body wall, which was novel, could easily be used for future investigations of other entry techniques. Although the goal of creating this modified model was to achieve high fidelity, it did not include mimicry of blood flow and hemorrhage, which could potentially have been used to assess hemostasis techniques and allow further evaluation of precision. Such simulation could have also greatly increased the cost of the model. Other limitations included the small sample size in all groups and, in particular, the small number of participants in the novice group who finished the procedure within the preestablished cutoff time. This limited number may have resulted in type II errors on analysis of our data. However, the cutoff time was an attempt to link performance to what is realistic in clinical surgery. Our initial goal was to match the number of participants who had familiarity with laparoscopic surgery (the intermediate plus experienced groups) to the number of novice group participants. Because the study was conducted with participants from 2 teaching institutions, only a limited number of board-certified surgeons and surgical residents were enrolled. Another limitation was that a number of tests were performed for assessment purposes without adjustment for multiplicity, which increases the likelihood that some results were significant by chance. Our results can therefore be considered preliminary, and future studies might be limited to analysis of the best-performing assessment method to avoid this possibility. Finally, some of our results, such as basic laparoscopic skills test scores, might have been biased because participants at our facility received MISTELS-like training as part of their residency curriculum.
Overall, we found that the low-cost, high-fidelity model created for the study had robust construct and concurrent validity for assessment of laparoscopic skills in our study population, although face validity evaluation reflected that some aspects could be improved. Further studies are needed to determine the usefulness of the model in training programs.
Acknowledgments
Supported in part by the Alternatives Research and Development Foundation. None of the authors of this article had financial or personal relationships with individuals or organizations that could inappropriately influence or bias the content of the paper.
Presented in part at the 13th Annual Meeting of the Veterinary Endoscopy Society, Jackson Hole, Wyo, June, 2016.
ABBREVIATIONS
ACVS | American College of Veterinary Surgeons
GRS | Global rating scales
MISTELS | McGill inanimate system for training and evaluation of laparoscopic skills
OCRS | Operative component rating scales
OSATS | Objective structured assessment of technical skills
SLO | Simulated laparoscopic ovariectomy
VAS | Visual analog scale
Footnotes
a. FLS trainer box, VTi Medical, Waltham, Mass.
b. Canine MESI Torso abdomen model, Sawbones, Vashon, Wash.
c. EcoFlex 0030, Smooth-On Inc, Macungie, Penn.
d. Velcro industrial strength, Velcro Co, Manchester, NH.
e. Ternamian EndoTIP cannula, Karl Storz Endoscopy, Goleta, Calif.
f. 30° rigid endoscope, Karl Storz Endoscopy, Goleta, Calif.
g. Grasping forceps, Karl Storz Endoscopy, Goleta, Calif.
h. LigaSure Atlas 37-cm hand switching laparoscopic instrument, Covidien, Minneapolis, Minn.
i. Liquid latex, sable, Mehron Inc, Chestnut Ridge, NY.
j. Fujifilm Prescale Super Low, Fujifilm Corp, Valhalla, NY.
k. HD Webcam c525, Logitech, Newark, Calif.
l. Copies of the forms as used in the study are available upon request from the corresponding author.
m. NCSS 2004 Statistical Software, Kaysville, Utah.
n. Prism 5, GraphPad Software Inc, La Jolla, Calif.
References
1. Mayhew P. Developing minimally invasive surgery in companion animals. Vet Rec 2011;169:177–178.
2. Culp WT, Mayhew PD, Brown DC. The effect of laparoscopic versus open ovariectomy on postsurgical activity in small dogs. Vet Surg 2009;38:811–817.
3. Dupré G, Fiorbianco V, Skalicky M, et al. Laparoscopic ovariectomy in dogs: comparison between single portal and two-portal access. Vet Surg 2009;38:818–824.
4. Manassero M, Leperlier D, Vallefuoco R, et al. Laparoscopic ovariectomy in dogs using a single-port multiple-access device. Vet Rec 2012;171:69.
5. Naiman JH, Mayhew PD, Steffey MA, et al. Laparoscopic treatment of ovarian remnant syndrome in dogs and cats: 7 cases (2010–2013). J Am Vet Med Assoc 2014;245:1251–1257.
6. Levi O, Kass PH, Lee LY, et al. Comparison of the ability of veterinary medical students to perform laparoscopic versus conventional open ovariectomy on live dogs. J Am Vet Med Assoc 2015;247:1279–1288.
7. Sánchez-Margallo FM, Tapia-Araya A, Díaz-Guemes I. Preliminary application of a single-port access technique for laparoscopic ovariohysterectomy in dogs. Vet Rec Open 2015;2:e000153.
8. Tapia-Araya AE, Díaz-Güemes Martín-Portugués I, Bermejo LF, et al. Laparoscopic ovariectomy in dogs: comparison between laparoendoscopic single-site and three-portal access. J Vet Sci 2015;16:525–530.
9. Wallace ML, Case JB, Singh A, et al. Single incision, laparoscopic-assisted ovariohysterectomy for mucometra and pyometra in dogs. Vet Surg 2015;44(suppl 1):66–70.
10. Hanna GB, Cuschieri A. Influence of the optical axis-to-target view angle on endoscopic task performance. Surg Endosc 1999;13:371–375.
11. Hanna GB, Shimi SM, Cuschieri A. Task performance in endoscopic surgery is influenced by location of the image display. Ann Surg 1998;227:481–484.
12. Hanna GB, Shimi SM, Cuschieri A. Randomised study of influence of two-dimensional versus three-dimensional imaging on performance of laparoscopic cholecystectomy. Lancet 1998;351:248–251.
13. Pope JF, Knowles TG. Retrospective analysis of the learning curve associated with laparoscopic ovariectomy in dogs and associated perioperative complication rates. Vet Surg 2014;43:668–677.
14. Schout BM, Hendrikx AJ, Scheele F, et al. Validation and implementation of surgical simulators: a critical review of present, past, and future. Surg Endosc 2010;24:536–546.
15. Scott DJ, Cendan JC, Pugh CM, et al. The changing face of surgical education: simulation as the new paradigm. J Surg Res 2008;147:189–193.
16. Van Nortwick SS, Lendvay TS, Jensen AR, et al. Methodologies for establishing validity in surgical simulation studies. Surgery 2010;147:622–630.
17. Tapia-Araya AE, Usón-Gargallo J, Enciso S, et al. Assessment of laparoscopic skills in veterinarians using a canine laparoscopic simulator. J Vet Med Educ 2016;43:71–79.
18. Fransson BA, Ragle CA. Assessment of laparoscopic skills before and after simulation training with a canine abdominal model. J Am Vet Med Assoc 2010;236:1079–1084.
19. Langebæk R, Berendt M, Pedersen LT, et al. Features that contribute to the usefulness of low-fidelity models for surgical skills training. Vet Rec 2012;170:361.
20. Fransson BA, Ragle CA, Bryan ME. Effects of two training curricula on basic laparoscopic skills and surgical performance among veterinarians. J Am Vet Med Assoc 2012;241:451–460.
21. Levi O, Michelotti K, Schmidt P, et al. Comparison between training models to teach veterinary medical students basic laparoscopic surgery skills. J Vet Med Educ 2016;43:80–87.
22. Patel NR, Makai GE, Sloan NL, et al. Traditional versus simulation resident surgical laparoscopic salpingectomy training: a randomized controlled trial. J Minim Invasive Gynecol 2016;23:372–377.
23. Aggarwal R, Crochet P, Dias A, et al. Development of a virtual reality training curriculum for laparoscopic cholecystectomy. Br J Surg 2009;96:1086–1093.
24. Dehabadi M, Fernando B, Berlingieri P. The use of simulation in the acquisition of laparoscopic suturing skills. Int J Surg 2014;12:258–268.
25. Jensen K, Ringsted C, Hansen HJ, et al. Simulation-based training for thoracoscopic lobectomy: a randomized controlled trial: virtual-reality versus black-box simulation. Surg Endosc 2014;28:1821–1829.
26. Tunitsky-Bitton E, Propst K, Muffly T. Development and validation of a laparoscopic hysterectomy cuff closure simulation model for surgical training. Am J Obstet Gynecol 2016;214:392.e1–392.e6.
27. Fransson BA. Advances in laparoscopic skills training and management. Vet Clin North Am Small Anim Pract 2016;46:1–12.
28. Fraser SA, Klassen DR, Feldman LS, et al. Evaluating laparoscopic skills: setting the pass/fail score for the MISTELS system. Surg Endosc 2003;17:964–967.
29. Dauster B, Steinberg AP, Vassiliou MC, et al. Validity of the MISTELS simulator for laparoscopy training in urology. J Endourol 2005;19:541–545.
30. Vassiliou MC, Ghitulescu GA, Feldman LS, et al. The MISTELS program to measure technical skill in laparoscopic surgery: evidence for reliability. Surg Endosc 2006;20:744–747.
31. Sroka G, Feldman LS, Vassiliou MC, et al. Fundamentals of laparoscopic surgery simulator training to proficiency improves laparoscopic performance in the operating room-a randomized controlled trial. Am J Surg 2010;199:115–120.
32. Fransson BA, Ragle CA, Bryan ME. A laparoscopic surgical skills assessment tool for veterinarians. J Vet Med Educ 2010;37:304–313.
33. Seymour NE, Gallagher AG, Roman SA, et al. Virtual reality training improves operating room performance: results of a randomized, double-blinded study. Ann Surg 2002;236:458–463, disc 463–464.
34. Alaker M, Wynn GR, Arulampalam T. Virtual reality training in laparoscopic surgery: a systematic review & meta-analysis. Int J Surg 2016;29:85–94.
35. Fransson BA, Chen CY, Noyes JA, et al. Instrument motion metrics for laparoscopic skills assessment in virtual reality and augmented reality. Vet Surg 2016;45:O5–O13.
36. Ryall T, Judd BK, Gordon CJ. Simulation-based assessments in health professional education: a systematic review. J Multidiscip Healthc 2016;9:69–82.
37. Botden SM, Torab F, Buzink SN, et al. The importance of haptic feedback in laparoscopic suturing training and the additive value of virtual reality simulation. Surg Endosc 2008;22:1214–1222.
38. Botden SM, de Hingh IH, Jakimowicz JJ. Meaningful assessment method for laparoscopic suturing training in augmented reality. Surg Endosc 2009;23:2221–2228.
39. Nomura T, Mamada Y, Nakamura Y, et al. Laparoscopic skill improvement after virtual reality simulator training in medical students as assessed by augmented reality simulator. Asian J Endosc Surg 2015;8:408–412.
40. Chipman JG, Schmitz CC. Using objective structured assessment of technical skills to evaluate a basic skills simulation curriculum for first-year surgical residents. J Am Coll Surg 2009;209:364–370.
41. Niitsu H, Hirabayashi N, Yoshimitsu M, et al. Using the objective structured assessment of technical skills (OSATS) global rating scale to evaluate the skills of surgical trainees in the operating room. Surg Today 2013;43:271–275.
42. Hopmans CJ, den Hoed PT, van der Laan L, et al. Assessment of surgery residents’ operative skills in the operating theater using a modified Objective Structured Assessment of Technical Skills (OSATS): A prospective multicenter study. Surgery 2014;156:1078–1088.
43. Hatala R, Cook DA, Brydges R, et al. Constructing a validity argument for the objective structured assessment of technical skills (OSATS): a systematic review of validity evidence. Adv Health Sci Educ Theory Pract 2015;20:1149–1175.
44. Kramp KH, van Det MJ, Hoff C, et al. Validity and reliability of global operative assessment of laparoscopic skills (GOALS) in novice trainees performing a laparoscopic cholecystectomy. J Surg Educ 2015;72:351–358.
45. Vassiliou MC, Feldman LS, Andrew CG, et al. A global assessment tool for evaluation of intraoperative laparoscopic skills. Am J Surg 2005;190:107–113.
46. Bilgic E, Watanabe Y, McKendy K, et al. Reliable assessment of operative performance. Am J Surg 2016;211:426–430.
47. McDougall EM. Validation of surgical simulators. J Endourol 2007;21:244–247.
48. Kennedy KC, Tamburello KR, Hardie RJ. Peri-operative morbidity associated with ovariohysterectomy performed as part of a third-year veterinary surgical-training program. J Vet Med Educ 2011;38:408–413.
49. Derossis AM, Fried GM, Abrahamowicz M, et al. Development of a model for training and evaluation of laparoscopic skills. Am J Surg 1998;175:482–487.
50. Scott DJ, Bergen PC, Rege RV, et al. Laparoscopic training on bench models: better and more cost effective than operating room experience? J Am Coll Surg 2000;191:272–283.
51. Korndorffer JR Jr, Dunne JB, Sierra R, et al. Simulator training for laparoscopic suturing using performance goals translates to the operating room. J Am Coll Surg 2005;201:23–29.
52. Sidhu RS, Vikis E, Cheifetz R, et al. Self-assessment during a 2-day laparoscopic colectomy course: can surgeons judge how well they are learning new skills? Am J Surg 2006;191:677–681.
53. Willis RE, Van Sickle KR. Current status of simulation-based training in graduate medical education. Surg Clin North Am 2015;95:767–779.
54. Clements MB, Morrison KY, Schenkman NS. Evaluation of laparoscopic curricula in American urology residency training: a 5-year update. J Endourol 2016;30:347–353.
55. Nataraja RM, Ade-Ajayi N, Curry JI. Surgical skills training in the laparoscopic era: the use of a helping hand. Pediatr Surg Int 2006;22:1015–1020.
56. Griffon DJ, Cronin P, Kirby B, et al. Evaluation of a hemostasis model for teaching ovariohysterectomy in veterinary surgery. Vet Surg 2000;29:309–316.
57. Buote NJ. Laparoscopic ovariectomy and ovariohysterectomy. In: Fransson BA, Fransson BA, Mayhew PD, eds. Small animal laparoscopy and thoracoscopy. Hoboken, NJ: John Wiley & Sons Inc, 2015;207–216.
58. Bradley P. The history of simulation in medical education and possible future directions. Med Educ 2006;40:254–262.
59. Champault G, Cazacu F, Taffinder N. Serious trocar accidents in laparoscopic surgery: a French survey of 103,852 operations. Surg Laparosc Endosc 1996;6:367–370.
60. Johnson TG, Hooks WB III, Adams A, et al. Safety and efficacy of laparoscopic access in a surgical training program. Surg Laparosc Endosc Percutan Tech 2016;26:17–20.
61. Mayhew PD, Brown DC. Comparison of three techniques for ovarian pedicle hemostasis during laparoscopic-assisted ovariohysterectomy. Vet Surg 2007;36:541–547.
62. Davidson EB, Moll MD, Payton ME. Comparison of laparoscopic ovariohysterectomy and ovariohysterectomy in dogs. Vet Surg 2004;33:62–69.
63. Glass KB, Tarnay CM, Munro MG. Randomized comparison of the effect of manipulation on incisional parameters associated with a pyramidal laparoscopic trocar-cannula system and the EndoTIP cannula. J Am Assoc Gynecol Laparosc 2003;10:412–414.
64. Bhoyrul S, Vierra MA, Nezhat CR, et al. Trocar injuries in laparoscopic surgery. J Am Coll Surg 2001;192:677–683.
65. Jansen FW, Kolkman W, Bakkum EA, et al. Complications of laparoscopy: an inquiry about closed- versus open-entry technique. Am J Obstet Gynecol 2004;190:634–638.
66. Vilos GA, Ternamian A, Dempster J, et al. Laparoscopic entry: a review of techniques, technologies, and complications. J Obstet Gynaecol Can 2007;29:433–447.
67. Ahmad G, Gent D, Henderson D, et al. Laparoscopic entry techniques. Cochrane Database Syst Rev 2015;8:CD006583.
68. Shariati E, Bakhtiari J, Khalaj A, et al. Comparison between two portal laparoscopy and open surgery for ovariectomy in dogs. Vet Res Forum 2014;5:219–223.