American College of Veterinary Radiology and European College of Veterinary Diagnostic Imaging position statement on artificial intelligence

Ryan B. Appleby, DVM, DACVR, Ontario Veterinary College, University of Guelph, Guelph, ON, Canada (https://orcid.org/0000-0002-5515-8628); Matthew Difazio, DVM, DACVR, Department of Clinical Sciences, College of Veterinary Medicine, Kansas State University, Manhattan, KS; Nicolette Cassel, BVSc, ECVDI, Department of Clinical Sciences, College of Veterinary Medicine, Kansas State University, Manhattan, KS; Ryan Hennessey, MS, CACI International Inc, San Antonio, TX; and Parminder S. Basran, PhD, Clinical Sciences, Cornell University, Ithaca, NY (https://orcid.org/0000-0002-1573-1549)

Open access

Abstract

The American College of Veterinary Radiology (ACVR) and the European College of Veterinary Diagnostic Imaging (ECVDI) recognize the transformative potential of AI in veterinary diagnostic imaging and radiation oncology. This position statement outlines the guiding principles for the ethical development and integration of AI technologies to ensure patient safety and clinical effectiveness.

Artificial intelligence systems must adhere to good machine learning practices, emphasizing transparency, error reporting, and the involvement of clinical experts throughout development. These tools should also include robust mechanisms for secure patient data handling and postimplementation monitoring. The position highlights the critical importance of maintaining a veterinarian in the loop, preferably a board-certified radiologist or radiation oncologist, to interpret AI outputs and safeguard diagnostic quality.

Currently, no commercially available AI products for veterinary diagnostic imaging meet the required standards for transparency, validation, or safety. The ACVR and ECVDI advocate for rigorous peer-reviewed research, unbiased third-party evaluations, and interdisciplinary collaboration to establish evidence-based benchmarks for AI applications. Additionally, the statement calls for enhanced education on AI for veterinary professionals, from foundational training in curricula to continuing education for practitioners.

Veterinarians are encouraged to disclose AI usage to pet owners and provide alternative diagnostic options as needed. Regulatory bodies should establish guidelines to prevent misuse and protect the profession and patients. The ACVR and ECVDI stress the need for a cautious, informed approach to AI adoption, ensuring these technologies augment, rather than compromise, veterinary care.

Introduction

Artificial intelligence is an evolving technology poised to transform veterinary medicine, particularly in diagnostic imaging and radiation oncology. Artificial intelligence offers the potential to improve efficiency, accuracy, and consistency in interpreting imaging studies and planning treatments. However, unlike traditional diagnostic tools, AI systems represent a fundamentally new category of technology, requiring specialized education and expertise to ensure their safe and effective use. While the systems are promising, their complexity and rapid development have outpaced the establishment of comprehensive guidelines, standards, and best practices tailored to veterinary applications.

A key challenge lies in the lack of transparency and validation for AI tools currently available for veterinary diagnostic imaging. Many AI systems rely on proprietary methodologies and datasets that are not accessible for scrutiny by end users or regulatory bodies. This opacity makes it difficult to evaluate the clinical performance, reliability, and safety of these tools. Furthermore, while AI has the capacity to automate or assist with tasks such as lesion detection, organ contouring, and report generation, the absence of a qualified veterinary professional in the diagnostic loop may introduce risks of misdiagnosis and compromised patient care.

The American College of Veterinary Radiology (ACVR) and European College of Veterinary Diagnostic Imaging (ECVDI) position seeks to address this critical gap in knowledge and practice by establishing foundational principles for the ethical development, deployment, and oversight of AI in veterinary medicine. By emphasizing the need for transparency, rigorous validation, and the continued involvement of veterinary professionals, this position aims to protect patient safety, ensure high standards of care, and guide the responsible integration of AI into the veterinary field.

Problem Statement

The rapid development of AI in veterinary diagnostic imaging and radiation oncology has outpaced the establishment of standardized guidelines for its ethical use, transparency, and validation, creating significant risks to patient safety, diagnostic accuracy, and veterinary accountability.

Position

The ACVR and ECVDI support the development and use of ethical and transparent AI in veterinary diagnostic imaging and radiation oncology applications. The colleges acknowledge the potential positive transformational power of AI.

To best support veterinary teams and ensure patient safety, AI should be developed in accordance with the guiding principles of good machine learning practice.1 In doing so, AI systems should adhere to the guiding principles of transparency for machine learning–enabled medical devices.2

The ACVR and ECVDI believe that AI systems should always be used with a qualified veterinary professional in the loop. In veterinary diagnostic imaging, board-certified radiologists are best suited to evaluate the output of computer-aided diagnostic tools. The same is true of board-certified radiation oncologists for evaluation of computer-assisted strategies in the planning, delivery, and quality assurance of radiation treatments for animals.

Artificial intelligence systems that do not ensure safe and secure handling of patient data; do not provide transparency of their underlying methodology, training, and testing sets; do not allow postimplementation monitoring as defined by good machine learning practices; and do not allow transparency for machine learning–enabled medical devices1,2 should not be used in veterinary practice. There is currently no commercially available product for diagnostic imaging that meets these standards.

The ACVR and ECVDI support ongoing research and encourage the publication of both internal documentation and external validation of AI tools in high-quality veterinary journals, including the journal of our colleges, Veterinary Radiology and Ultrasound.

The ACVR and ECVDI urge the need for unbiased, third-party evaluations of AI tools to establish trust and ensure that these technologies meet the highest standards of clinical effectiveness.

Veterinarians should exercise caution when using AI in diagnostic imaging and must understand the limitations of the systems they are using. Legal responsibility for decisions made with any AI system has yet to be determined, but some degree of responsibility is likely to fall on veterinarians themselves rather than on the developers of the AI alone.

Background Information

The mission of the ACVR is “to promote excellence in patient care by providing leadership, innovation, and education in veterinary diagnostic imaging and radiation oncology.” Similarly, the “goal of the ECVDI is to advance veterinary diagnostic imaging in Europe and increase the competence of those who practise in this field.” As such, both colleges have an interest in the safe and effective development and use of AI in veterinary diagnostic imaging.

Artificial intelligence is an important, evolving technology for veterinary diagnostic imaging and radiation therapy that has the potential to improve the efficiency and accuracy of practice, mirroring the change AI is anticipated to deliver in human healthcare.3,4 Artificial intelligence represents a fundamentally new class of diagnostic tool, unlike those veterinarians have historically utilized, and the methods of evaluation, potential advantages, and potential pitfalls of AI systems are not easily understood without specific education5,6 in these subjects and direct experience with these technologies.

Artificial intelligence systems in veterinary diagnostic imaging have many potential use cases, including image preprocessing, dose reduction, clinical record review and summary, report templating, direct report generation, organ contouring, lesion detection, development of new screening and diagnostic tools, and improving the repeatability and objectivity of radiomics measurements.7 Artificial intelligence technologies used in radiation oncology may also be deployed in planning, delivery, and quality assurance of radiation treatments for animals.8

Many AI tools currently available in the veterinary space are being developed to perform direct image reading and report generation, attempting to replicate the expert skills of trained radiologists, or to provide rapid screening of images in advance of human review. Such broad applications are beyond what AI can satisfactorily accomplish, in contrast to narrower uses of AI that enhance or supplement human review. Systems attempting to replicate the expertise of radiologists without transparency and research demonstrating their efficacy and effects on clinical outcomes pose a risk to the standard of care in veterinary practice. Due to the inherent nature and limitations of AI systems, it is the opinion of the ACVR and ECVDI that AI tools must always be developed and deployed with a veterinarian in the loop (a radiologist9 or skilled veterinarian) to audit results, ensure patient safety, and maintain standards of practice.

The ACVR and ECVDI recognize the great promise of AI and strongly support and encourage continued development in veterinary diagnostic imaging. There is a particular unmet need for development of AI tools that assist the performance and efficiency of veterinary radiologists and radiation oncologists, to best leverage their domain expertise.

Recommendations

  • Artificial intelligence systems should be developed in accordance with good machine learning practices for medical device development1; of particular importance are transparency,2 error reporting, involvement of clinical experts throughout the engineering process, and separation of marketing claims from scientific validation.

  • Peer-reviewed research on the clinical utility of AI systems is strongly recommended, as it is paramount to engendering trust.

  • The ACVR and ECVDI recommend the establishment of unbiased, third-party evaluations of AI tools to establish trust and ensure that these technologies meet the highest standards of clinical effectiveness.

  • Funding agencies are urged to support evaluation of AI products, and researchers are encouraged to actively engage with domain experts in both AI and diagnostic imaging and radiation oncology to meet the needs of the profession.

  • Veterinary colleges should establish basic AI education within their curricula to best support future graduates, as AI is likely to play a role in the future of the veterinary profession, including diagnostic imaging and radiation oncology.

  • Veterinarians in practice, and particularly radiologists, should seek out continuing education to be informed on AI, particularly the validation and evaluation of clinical performance of veterinary diagnostic imaging AI systems.

  • The ACVR and ECVDI recommend both transparency from AI developers and end-user education, as a lack of understanding of AI systems could lead to misdiagnosis and misinterpretation.

  • Veterinarians should transparently disclose to owners when AI is being utilized in a diagnostic role,10 be able to discuss the benefits and drawbacks of the selected tools, and have alternative diagnostic plans available should owners choose to decline AI utilization.

  • Veterinary radiologists with domain expertise are encouraged to embody the roles of community ambassadors, to provide continuing education, and to perform an advisory function pertaining to diagnostic imaging AI in the veterinary space.

  • Stakeholders from a variety of groups, including veterinary medical associations, regulatory bodies, and specialist colleges, should establish an independent organization dedicated to creating guidelines for labeling veterinary AI products, like those established by the Association of American Feed Control Officials for labeling food products or by the FDA for certifying software as medical devices.

  • Regulatory bodies such as national, state, provincial, or other veterinary certifying boards are encouraged to establish rules and guidelines outlining acceptable roles of AI in practice, to protect veterinary professionals and patients from potential harms of misuse.

Acknowledgments

This work was written primarily by the authors listed but is the consensus recommendation of the American College of Veterinary Radiology and European College of Veterinary Diagnostic Imaging Artificial Intelligence Education and Development Committee. The committee members listed here contributed through discussion and reviewed the text of the position, future recommendations, and background information. American College of Veterinary Radiology (Diagnostic Imaging/Radiology): Matt Winter, Adrien Hespel, Eli Cohen, Hiro Murakami, Michael Bailey, Diane Wilson, Erin Hennessey, Jennifer Brisson, Seth Wallack, and Andrew Weissman. American College of Veterinary Radiology (Radiation Oncology): David Ruslander. European College of Veterinary Diagnostic Imaging: Alex Smith, Daniel Ivan, Alessia Ebling, Pablo Barge-Carmona, Will Humphreys, and Benedict Amphimaque. Members at large: John Craig, Del Leary, and Tommaso Banzato.

Disclosures

The authors have no specific conflicts of interest to declare. The members of the Artificial Intelligence Education and Development Committee each hold affiliations in the field of veterinary radiology, radiation oncology, or artificial intelligence. Their backgrounds and affiliations may influence their points of view, though they did not hold direct influence over the creation of this position.

ChatGPT was used in the brainstorming and proofing stages of this work.

Funding

The authors have nothing to disclose.

References

  1. US FDA, Health Canada, UK Medicines and Healthcare Products Regulatory Agency. Good machine learning practice for medical device development: guiding principles. October 2021. Accessed March 4, 2025. https://www.fda.gov/medical-devices/software-medical-device-samd/good-machine-learning-practice-medical-device-development-guiding-principles

  2. US FDA, Health Canada, UK Medicines and Healthcare Products Regulatory Agency. Transparency for machine learning-enabled medical devices: guiding principles. June 2024. Accessed March 4, 2025. https://www.fda.gov/medical-devices/software-medical-device-samd/transparency-machine-learning-enabled-medical-devices-guiding-principles

  3. Tang A, Tam R, Cadrin-ChĂȘnevert A, et al.; Canadian Association of Radiologists (CAR) Artificial Intelligence Working Group. Canadian Association of Radiologists white paper on artificial intelligence in radiology. Can Assoc Radiol J. 2018;69(2):120-135. doi:10.1016/j.carj.2018.02.002

  4. Thompson RF, Valdes G, Fuller CD, et al. Artificial intelligence in radiation oncology: a specialty-wide disruptive transformation? Radiother Oncol. 2018;129(3):421-426. doi:10.1016/j.radonc.2018.05.030

  5. Hedderich DM, Keicher M, Wiestler B, et al. AI for doctors—a course to educate medical professionals in artificial intelligence for medical imaging. Healthcare (Basel). 2021;9(10):1278. doi:10.3390/healthcare9101278

  6. Perchik JD, Smith AD, Elkassem AA, et al. Artificial intelligence literacy: developing a multi-institutional infrastructure for AI education. Acad Radiol. 2023;30(7):1472-1480. doi:10.1016/j.acra.2022.10.002

  7. Appleby RB, Basran PS. Artificial intelligence in diagnostic imaging. Adv Small Anim Care. 2024;5(1):67-77. doi:10.1016/j.yasa.2024.06.005

  8. Leary D, Basran PS. The role of artificial intelligence in veterinary radiation oncology. Vet Radiol Ultrasound. 2022;63(suppl 1):903-912. doi:10.1111/vru.13162

  9. Scheek D, Rezazade Mehrizi MH, Ranschaert E. Radiologists in the loop: the roles of radiologists in the development of AI applications. Eur Radiol. 2021;31(10):7960-7968. doi:10.1007/s00330-021-07879-w

  10. Kiseleva A, Kotzinos D, De Hert P. Transparency of AI in healthcare as a multilayered system of accountabilities: between legal requirements and technical limitations. Front Artif Intell. 2022;5:879603. doi:10.3389/frai.2022.879603