Abstract
The field of veterinary medicine, like many others, is expected to undergo a significant transformation due to artificial intelligence (AI), although the full extent remains unclear. Artificial intelligence is already becoming prominent throughout daily life (eg, recommending movies, completing text messages, predicting traffic), yet many people do not realize they interact with it regularly. Despite its prevalence, opinions on AI in veterinary medicine range from skepticism to optimism to indifference. However, we are living through a key moment that calls for a balanced perspective, as the way we choose to address AI now will shape the future of the field. Future generations may view us as either overly optimistic, blinded by AI's allure, or overly pessimistic, failing to recognize its potential. By understanding how algorithms function and predictions are made, we can begin to demystify AI, seeing it not as an all-knowing entity but as a powerful tool that will assist veterinary professionals in providing high-level care and advancing the field. Building awareness allows us to appreciate its strengths and limitations and recognize the ethical dilemmas that may arise. This review aims to provide an accessible overview of the status of AI in veterinary medicine; it is not intended to be an exhaustive account of AI.
Foundational Overview
Just as veterinarians would not use a new diagnostic tool without understanding its basic principles, the same applies to artificial intelligence (AI). While most veterinarians are not experts in advanced modeling, and should not be expected to be, a basic understanding is essential for critical appraisal. In the past, unfamiliarity with promising new tools has led to blind trust, even when those tools were significantly flawed.1 Conversely, a lack of technical knowledge about AI seems to correlate with greater skepticism among veterinary professionals (VPs).2,3 If AI is to be widely adopted, a foundational level of understanding is required.
Interest in AI has increased rapidly in the past 2 years,4 correlating with the public release of OpenAI's ChatGPT model in late 2022.5 However, ChatGPT and other large language models are just one aspect of the broad field of AI, which has been developing for decades. Dating the exact birth of AI can be challenging, but the 1940s and 1950s are widely considered formative years, leading to its recognition as an academic discipline in 1956.6,7 In 1943, McCulloch and Pitts8 proposed the theory behind neural networks, suggesting the brain could be represented as a network of switches where neurons fire or remain inactive based on inputs. This implied that complex decision-making could be modeled using interconnected units mimicking neurons, processing inputs, and passing outputs through multiple hidden layers; deep neural networks can contain many such layers and millions of parameters (Figure 1). Neural networks are just one of many different “architectures” or models available. Such models fall under the branch of machine learning (ML), a subfield of AI in which systems learn from experience by correcting their errors, rather than relying on rigid, predefined instructions. This self-learning capability is powerful, as understanding the exact relationship between inputs and outputs is not necessary. However, it also means we sometimes will not know how a model arrives at a decision, making it a “black box,” which has unique implications in medicine.9,10
Figure 1. Illustrative examples of neural network concepts. A—A simple neural network schematic for predicting the risk of kidney disease in dogs, using a sample dataset with patient attributes such as age, urine protein-to-creatinine ratio (UPCR), and weight. This example is illustrative and not based on a real model. B—A closer view of a single node (neuron) in the network, highlighting how inputs are weighted and summed to calculate the node's output. C—A comparative schematic illustrating the structural difference between a simple neural network with 1 hidden layer and a deep neural network with 3 hidden layers. Created with Draw.io.
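To make the idea of a forward pass concrete, the minimal Python sketch below mirrors the illustrative network in Figure 1. All input values, weights, and biases are invented for illustration (this is not a real diagnostic model); a trained network would learn its weights from data. Each node simply computes a weighted sum of its inputs, applies an activation function, and passes the result to the next layer.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical inputs echoing Figure 1: age (years), UPCR, and weight (kg).
x = np.array([9.0, 2.4, 28.0])

# Hypothetical weights and biases; a trained model would learn these from data.
W_hidden = np.array([[0.30, 1.10, -0.02],
                     [-0.15, 0.80, 0.01]])   # 2 hidden nodes, each weighting 3 inputs
b_hidden = np.array([-1.0, 0.2])
w_out = np.array([1.4, 0.9])                 # output node weights for the 2 hidden values
b_out = -2.0

hidden = sigmoid(W_hidden @ x + b_hidden)    # each node: weighted sum -> activation
risk = sigmoid(w_out @ hidden + b_out)       # final output interpreted as a probability
print(f"Illustrative kidney-disease risk score: {risk:.2f}")
```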
Recognizing that AI and ML build on long-established statistical methods helps set realistic expectations for their potential and limitations. For example, neural network models function as layers of interconnected statistical models, with each node processing inputs and passing information to the next layer. Each node is conceptually similar to a specialized form of logistic regression,11 a longstanding tool in statistics and veterinary research. Like all models, they will not perfectly predict reality, and their accuracy heavily depends on the quality of training data. Furthermore, the fact that a model is labeled as “AI” does not automatically make it superior to traditional statistical methods.12
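The conceptual equivalence between a single node and logistic regression can be shown directly. In the hypothetical sketch below, which uses synthetic data and scikit-learn purely for illustration, a fitted logistic regression's predicted probability is reproduced by passing the same weighted sum through a sigmoid activation, which is exactly the computation a single node performs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic data standing in for patient features and a binary outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def node(x, w, b):
    """One neural-network node: weighted sum passed through a sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x_new = X[0]
p_regression = model.predict_proba(x_new.reshape(1, -1))[0, 1]
p_single_node = node(x_new, model.coef_[0], model.intercept_[0])
print(p_regression, p_single_node)   # the two probabilities match
```

The two printed probabilities agree to floating-point precision, underscoring that the novelty of neural networks lies in stacking and jointly training many such units, not in the arithmetic of any single one.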
While ML focuses on systems learning from data, AI encompasses a broader scope, including reasoning, problem-solving, and natural language processing, as seen in generative AI models like ChatGPT.13 Generative AI models are impressive and may appear convincingly sentient, but they are still considered narrow intelligence (NI), as they are designed to perform specific tasks, such as predicting the next word in a sentence. These NI algorithms can easily fail when asked to perform even simple tasks outside their intended scope, like counting to 10.14 In contrast, artificial general intelligence would possess the ability to understand, learn, and apply knowledge across a wide range of domains, equal to or beyond human intelligence.15 Artificial general intelligence remains theoretical, with ongoing debate about when or if it will be achieved, yet many people may not recognize the distinction between NI and artificial general intelligence.
Current Applications
Artificial intelligence has numerous potential and realized applications in veterinary medicine, ranging from precision medicine and predictive models for disease outbreaks to appointment scheduling and report drafting. This section introduces a selection of existing and emerging applications of AI across the profession (Figure 2).
Figure 2. Key applications of artificial intelligence (AI) in veterinary medicine. A visual summary of AI technologies and tools grouped by theme. AMR = Antimicrobial resistance. Created with Draw.io.
Workflow automation
Workflow automation is perhaps the most readily adopted category of AI tools, streamlining routine tasks and enhancing daily operations. By reducing manual input and improving efficiency, these tools free professionals to focus on patient care and client interactions. Examples include automated scribes that document client interactions and generate SOAP notes,16 appointment scheduling systems that predict client demand, and inventory management software that dynamically forecasts supply needs. Generative AI can assist with drafting standard operating procedures, composing emails, visualizing clinic data, and even managing billing, payments, and compliance tracking.17 Artificial intelligence can also automate client interactions, such as sending reminder emails or postcare follow-ups, and is advancing to handle routine phone calls, like scheduling appointments, with convincingly human-like conversation.18
In clinic management, AI models are able to predict equipment maintenance needs, automatically order parts, and schedule repairs to minimize downtime.19,20 In addition, AI-driven telemedicine tools triage cases by analyzing client-submitted photos and symptoms, prioritizing critical cases for virtual consultations.21 By offloading repetitive tasks to algorithms, VPs can better concentrate on complex, high-value aspects of patient care.
Diagnostics and clinical support
Artificial intelligence is rapidly advancing diagnostics and clinical support, improving the speed and accuracy of diagnoses and offering new ways to manage patient care both in-clinic and remotely. Leveraging large datasets, AI assists veterinarians in making informed clinical decisions by drawing from literature, analogous cases, and medical records to suggest diagnoses based on case-specific information. This diverse collection of information improves accuracy and shortens time to diagnosis for diseases with low prevalence or a low index of suspicion.22–25 Artificial intelligence–enhanced diagnostic tools are also working toward improving sample analysis in clinics and in laboratories. For example, in-clinic cellular analyzers26 and rapid tests for antimicrobial resistance27,28 could detect abnormalities and resistant pathogens in near real time, enabling early diagnoses and timely, targeted treatments.
Radiograph and imaging interpretation has been a major focus for medical AI.29–31 Machine learning algorithms are becoming proficient at detecting certain abnormalities in scans with high accuracy.32,33 While they show promise, their performance remains a complement to, rather than a replacement for, the expertise of specialized radiologists. In human medicine, the field of radiomics uses ML to extract quantitative information from scans, such as pixel intensity and texture, providing a previously inaccessible source of information.34 Using this information, ML models can now detect cardiovascular conditions from chest x-rays,35,36 predict diabetes from retinal scans,37 and determine age and sex from electrocardiograms38 in humans. For unremarkable cases, AI assistance can shorten reading times for radiologists.39 Many of these algorithms have yet to be tested or introduced in veterinary medicine but could conceivably follow suit.
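As a rough illustration of the kind of quantitative information radiomics extracts, the hypothetical sketch below computes a few first-order intensity statistics from a synthetic image array. Real radiomics toolkits compute far richer feature sets (shape descriptors, texture matrices, wavelet features) from actual DICOM pixel data within defined regions of interest.

```python
import numpy as np

def first_order_features(image: np.ndarray) -> dict:
    """A handful of simple intensity statistics of the kind radiomics pipelines
    quantify; real toolkits extract hundreds of features per region of interest."""
    vals = image.ravel().astype(float)
    counts, _ = np.histogram(vals, bins=32)
    p = counts / counts.sum()
    p = p[p > 0]
    return {
        "mean": vals.mean(),
        "std": vals.std(),
        "skewness": ((vals - vals.mean()) ** 3).mean() / vals.std() ** 3,
        "entropy": float(-np.sum(p * np.log2(p))),  # coarse heterogeneity measure
    }

# Synthetic 8-bit "radiograph" stand-in; a real workflow would load DICOM pixels.
fake_scan = np.random.default_rng(1).integers(0, 256, size=(256, 256))
print(first_order_features(fake_scan))
```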
The “Internet of Things” (IoT) integrates physical devices, like sensors, into networks that exchange data in real time. In veterinary medicine, IoT devices monitor patient health remotely, providing continuous updates on vital signs, activity levels, and production metrics. Combined with AI models, these data are used to detect early signs of disease and track recovery progress. For example, in dairy cattle, odor sensors can predict ketosis40; temperature, motion, and sound sensors can predict mastitis and foot-and-mouth disease41; and in companion animals, smart collars predict overall health scores.42 Integration of IoT data with AI improves access to care for rural clients through remote monitoring and telemedicine, delivering enhanced care from a distance and providing valuable data for in-clinic visits.21,43
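A simple way to picture how continuous sensor streams feed AI-based monitoring is a rolling-baseline anomaly check, sketched below with simulated wearable temperature data. All values, the window size, and the threshold are hypothetical; production systems use far more sophisticated, multivariate models.

```python
import numpy as np

def flag_anomalies(readings, window=24, threshold=3.0):
    """Flag readings that drift far from the rolling baseline of the previous window."""
    readings = np.asarray(readings, dtype=float)
    flags = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        z = (readings[i] - baseline.mean()) / (baseline.std() + 1e-9)
        if abs(z) > threshold:
            flags.append((i, round(readings[i], 2), round(z, 1)))
    return flags

# Hypothetical hourly body-temperature stream from a wearable sensor (degrees C).
rng = np.random.default_rng(2)
temps = 38.5 + rng.normal(0, 0.1, size=72)
temps[60:] += 1.2   # simulated fever onset in the final 12 hours
print(flag_anomalies(temps))   # indices, values, and z scores of flagged readings
```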
AI-driven models could soon develop patient-specific protocols, using factors like environment, genetics, and medical history to predict individual responses and tailor care plans accordingly.44,45 This personalized, data-driven approach could minimize resistance, reduce side effects, and improve overall outcomes.
Predictive analytics and forecasting
Machine learning for predictive modeling has long been utilized in disease surveillance and epidemiology, employing methods such as regression and time series forecasting to predict the timing, location, risk factors, and scale of outbreaks. Advances in computational power and the availability of big data, characterized not only by volume, but also by variety, veracity, velocity, and value,46 have improved prediction accuracy, enhanced real-time monitoring, and automated disease surveillance systems. Modern platforms use fully automated ML pipelines to detect emerging animal disease threats earlier than traditional methods.47,48
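In its simplest form, time series forecasting of case counts can be a first-order autoregression: next week's count is modeled as a linear function of this week's. The sketch below uses entirely hypothetical data and ignores smoothing, seasonality, and covariates; operational surveillance platforms layer much richer models on top of this basic principle.

```python
import numpy as np

def fit_ar1(counts):
    """Least-squares fit of a first-order autoregression: y_t ~ a + b * y_(t-1)."""
    y_prev, y_next = counts[:-1], counts[1:]
    b, a = np.polyfit(y_prev, y_next, 1)   # slope first, then intercept
    return a, b

def forecast(counts, steps, a, b):
    preds, last = [], counts[-1]
    for _ in range(steps):
        last = a + b * last
        preds.append(round(last, 1))
    return preds

# Hypothetical weekly case counts for a reportable disease.
weekly_cases = np.array([3, 4, 6, 9, 14, 20, 31, 45], dtype=float)
a, b = fit_ar1(weekly_cases)
print(forecast(weekly_cases, steps=3, a=a, b=b))   # naive short-term projection
```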
Models leveraging diverse data sources, including images, text, genomic sequences, and audio recordings, are enhancing predictive capabilities, while multimodal models combine these data types for deeper insights.49 For example, Clostridium perfringens infections in chickens can be predicted using audio data,50 bovine tuberculosis can be predicted with infrared scans and lesion presence,51 and avian influenza can be predicted with farm data and environmental variables.52 In honeybees, audio, temperature, and environmental data are used to predict disease states, since physical exams are challenging.53,54
Predictive models are also making the allocation of surveillance resources more efficient by focusing on high-risk areas, reducing unnecessary sampling, and enhancing early detection.55 By analyzing viral genetic code, algorithms predict vulnerable host species, allowing for proactive biosecurity measures.56 In labs, samples can be prioritized by predicted risk for faster, targeted diagnostics.57 At the patient level, models are improving predictions of antimicrobial-resistant infections,58 genetic conditions,59 and surgical complications.60
Research and development
Intelligent tools are already integrated throughout research, streamlining all stages of the scientific method. Knowledge synthesis tools can quickly conduct comprehensive literature searches, summarizing key findings and generating detailed reviews, including counts of studies supporting or refuting a claim.61–63 Tools like Google's NotebookLM enable interaction with libraries of uploaded research articles, allowing users to ask content-related questions and instantly generate human-like podcast summaries. This ability to efficiently search and summarize available literature will become increasingly important as the number of articles published each day continues to grow while reading comprehension remains stagnant.64 For young academics, these tools can substantially cut down on the time required to familiarize themselves with the landscape of existing literature.
Natural language processing algorithms hold great potential for extracting meaning from free-text digital patient records. Classified as “messy data,” these records have historically been underutilized due to the time and complexity involved in their analysis. However, ML models can efficiently process thousands of free-text entries, uncovering valuable insights. These data sources often contain nuanced information that is crucial for understanding patient history, identifying trends in disease progression, and improving diagnostic accuracy.65–67
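A minimal sketch of this kind of free-text processing is shown below, using a generic bag-of-words pipeline on a handful of invented note snippets. Real applications train on thousands of curated records and often use more powerful language models, but the basic pattern of vectorizing text and fitting a classifier is the same.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented free-text snippets labeled for whether they suggest a polyuria/polydipsia work-up.
notes = [
    "pu/pd for 2 weeks, drinking excessively, dilute urine",
    "routine wellness exam, no concerns reported by owner",
    "increased thirst and urination, weight loss noted",
    "annual vaccines given, bright alert and responsive",
]
labels = [1, 0, 1, 0]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(notes, labels)

new_note = ["owner reports the dog is drinking a lot more water than usual"]
print(classifier.predict(new_note))   # predicted label for an unseen note
```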
Advances in AI are unlocking new possibilities in biomedical research. Algorithms that predict 3-D protein structures from amino acid sequences are deepening our understanding of protein function.68 Automated screening of molecular databases and predictive models are accelerating the discovery of new pharmaceuticals, including antimicrobials.45 AI-driven platforms, such as high-throughput screening systems, are revolutionizing the fields of genomics, molecular systems engineering, and bionanomaterials discovery.69 In addition, AI is enhancing clinical trial design and management, reducing costs and expediting the delivery of new drugs to market.69
Effective communication of research findings is a common weak point for researchers. However, generative AI models can enhance writing quality and efficiency, while still meeting standards of high-impact journals.70,71 While it is not advisable to use generative AI to write research papers in their entirety, selective applications such as overcoming “blank page anxiety,” improving grammar, drafting outlines, or tailoring content can save academics significant time in the writing process.72
Education
As the veterinary curriculum continues to expand and new technologies emerge, education for both new and experienced veterinarians remains essential to maintaining high standards of care. Artificial intelligence–powered education platforms like Khanmigo, developed by Khan Academy, are playing a growing role in veterinary education by providing innovative tools that enhance learning through personalized, on-demand support for students as well as curriculum design and lesson planning for educators.73 Intelligent learning management systems can guide students by adjusting the pace and sequence of lessons based on their progress, while also providing educators with insights into student performance.74
AI-driven chatbots, for example, can simulate clinical scenarios, answer student questions, and assist in learning by summarizing key concepts from lecture notes. Chatbots can also help students practice client communication by role-playing clients with differing personality types and asking follow-up questions that push students to explain complex medical information in simplified terms.
Adaptive testing techniques dynamically adjust exam questions to individual students, resulting in more accurate assessments of their learning outcomes.75 Generative AI can enhance these systems by expanding question banks and ensuring the material remains up-to-date. Meanwhile, virtual and augmented reality tools are transforming surgical training by overlaying critical information onto real-world procedures and providing real-time feedback during complex operations through the use of AI.76
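The core loop of adaptive testing is straightforward to sketch: select the question whose difficulty best matches the current ability estimate, then update that estimate after each response. The toy example below uses an invented question bank and a mock student, standing in for the item-response-theory machinery that real assessment systems use.

```python
def run_adaptive_quiz(question_bank, answer_fn, start_ability=0.0, step=0.5):
    """Minimal adaptive loop: pick the unanswered question whose difficulty is
    closest to the current ability estimate, then nudge the estimate up or down."""
    ability = start_ability
    remaining = dict(question_bank)   # question text -> difficulty (arbitrary scale)
    history = []
    while remaining:
        question = min(remaining, key=lambda q: abs(remaining[q] - ability))
        correct = answer_fn(question)
        ability += step if correct else -step
        history.append((question, remaining.pop(question), correct, round(ability, 2)))
    return ability, history

# Invented question bank and a mock student who answers easier items correctly.
bank = {
    "Name one NSAID licensed for dogs": -1.0,
    "Interpret this blood gas result": 0.5,
    "Plan surgical correction of a GDV": 1.5,
}
final_ability, log = run_adaptive_quiz(bank, answer_fn=lambda q: bank[q] < 1.0)
print(final_ability, log)
```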
Strengths and Limitations
Strengths
One of AI's most significant strengths is its ability to perform tasks without fatigue, maintaining consistent accuracy and efficiency over time, unlike human VPs, whose decision-making ability can wane over their shift.77 These systems excel at high-volume tasks such as image recognition, where AI can repeatedly and methodically analyze diagnostic images or data sources. The ability of AI to detect patterns, correlations, and anomalies in large datasets allows for earlier disease detection and decision support. By automating repetitive, data-intensive tasks, AI can relieve VPs of less fulfilling yet high-frequency work, possibly alleviating decision fatigue and burnout.
Artificial intelligence could also revolutionize access to veterinary care, especially in remote or underserved areas. By integrating with IoT devices, AI can efficiently analyze and interpret large amounts of data collected through continuous, real-time monitoring of animal health, combining multiple data streams (eg, sensor data, behavioral tracking) to detect subtle changes.21,42,43 This predictive capability allows for timely interventions, improving outcomes while supporting global efforts to provide equitable access to veterinary care. Artificial intelligence systems are particularly adept at scaling data processing, making them ideal for large-scale livestock management, disease surveillance, and conservation efforts in rural areas. Additionally, AI's ability to process data locally through edge computing, where data are processed near their source rather than in centralized servers, reduces latency, making real-time decision-making feasible even in areas with limited connectivity.
The integration of AI into veterinary medicine not only advances technology but also fosters collaboration across disciplines. The development and application of AI require cooperation between computer scientists, data engineers, biomedical researchers, and clinicians. This convergence of expertise creates a collaborative environment that encourages the exchange of ideas and problem-solving across fields. As AI systems evolve, interdisciplinary efforts will become increasingly crucial, forging a shared goal of improving animal health and welfare.
Furthermore, the adaptability of AI methods will continue to drive innovation as global data volumes reach unprecedented scales. By 2025, the global data volume is expected to reach 175 zettabytes,78 a scale that traditional methods cannot effectively manage. The ability of AI to process and analyze both structured and unstructured data will be essential in extracting meaningful insights.
Limitations
A primary challenge of AI is its dependence on high-quality data for training. The adage “garbage in equals garbage out” highlights that flawed input data lead to unreliable outputs. In veterinary medicine, data inconsistencies arise from varying record-keeping practices among professionals.79 Setting standards for recording high-quality data is crucial, as many AI models require large amounts of accurate information to function effectively. Furthermore, finding ground-truth data can be a limiting factor in validating model predictions, often hindered by incomplete records or loss to follow-up. Compounding this, veterinary datasets are often limited in size and diversity compared to those in human medicine. Access to large, comprehensive datasets may not always be feasible, and those capable of developing AI models may lack access to necessary data, and vice versa. In addition, training models on data from a single source can perpetuate existing biases,80 especially if skewed toward certain breeds or species. To create the most generalizable models, multiple sources should be combined, although sharing health data comes with its own challenges.81,82
Additionally, AI models require constant sources of new data to stay relevant, meaning they need regular updates with fresh information to maintain accuracy. This involves incorporating new cases, emerging health trends, and advancements in diagnostic methods to ensure the models remain applicable and aligned with current practices in veterinary medicine. However, “big data” does not automatically equate to “good data.” Without careful curation and validation, large datasets may perpetuate inaccuracies and biases, undermining the reliability of AI tools. Artificial intelligence models can inadvertently reflect and amplify biases present in their training data, which may originate from 3 main sources (adapted from DeCamp and Lindvall83):
- 1. Biased datasets: overrepresentation of certain populations or conditions can lead to skewed models. For example, data might overrepresent animals from clients more likely to seek and be able to afford veterinary care, excluding segments of the population.
- 2. Biased processing: flawed assumptions during data processing and model development can introduce bias. Developers could unintentionally embed their own biases or overlook critical variables.
- 3. Biased outputs and uses: applying AI models inconsistently between patients can create disparities in care. Using AI tools for some patients but not others can lead to unequal treatment recommendations and outcomes.
Artificial intelligence models trained on general datasets (eg, Wikipedia or social media) may incorporate inaccuracies or misinformation, as not all information made public on the internet is factual. Large language models do not inherently distinguish between correct and incorrect information, which can result in false outputs. Some of these data may also have been accurate at the time of collection but have since become outdated. For example, models trained on data before 2020 would have no information on a vaccine for COVID-19. Understanding the limitations of a model's training data and keeping a human in the loop for critical decisions is essential.
A distinct challenge in veterinary medicine is the diversity of species, which limits the applicability of AI models across different animal types. This necessitates the development of species-specific models, as variations in physiology affect disease presentation, diagnostics, and treatment. Creating tailored AI systems for each species requires specialized data and algorithms, making it both resource intensive and time consuming.
Integrating AI tools into veterinary workflows can be challenging. Practices may struggle to adapt to new technologies, and staff may need additional training to use them effectively and explain them to clients. This integration process could temporarily disrupt workflow efficiency, leading to reluctance in long-term adoption. In addition, the financial investment required to implement AI technologies can be prohibitive, especially for smaller clinics.2 Beyond the cost of the technology itself, acquiring the necessary data to train clinic-specific models poses an additional expense. While corporate clinics can pool data across multiple locations, smaller private practices may not be able to meet the data needs to train or keep models up to date.84
Overreliance on AI presents potential challenges to the progression of veterinary medicine. Excessive dependence could hinder the development of new insights and approaches if we lean too heavily on preexisting models rather than exploring novel methods. Furthermore, without continuous input from experienced professionals, the generation of new, high-quality data could slow, limiting data pools for further advancements. Insights from human medicine indicate that multidisciplinary teams are essential for the successful deployment of AI in healthcare.84–86 Finally, while AI can streamline complex tasks, it might encourage a shallower understanding of underlying principles if used as a substitute for critical thinking and clinical judgment, which are essential to the growth of the field.
Legal and Ethical Considerations
The integration of AI into veterinary medicine introduces complex ethical challenges that must be addressed responsibly. Trustworthy AI must be lawful, ethical, and robust.87 While AI is neither inherently ethical nor unethical, its morality depends on its implementation. Ensuring that AI systems adhere to the principle of primum non nocere (first, do no harm) requires an understanding of their accuracy, potential biases, and limitations. Without transparency in how AI operates and makes decisions, assessing risks and ethical appropriateness becomes difficult.87
A major ethical concern is the black-box nature of many models: when the underlying decision-making process cannot be fully traced, clinical trust and the veterinarian-client relationship may be undermined. Without clear explanations for diagnoses or treatment recommendations, informed consent and the ability to explain treatment options are compromised. This lack of interpretability leads to ethical dilemmas, especially when the professional judgment of the veterinarian and the output of the model do not align. In such cases, practitioners may face pressure to follow AI-driven recommendations, raising concerns about the erosion of clinical autonomy and the potential for AI to override nuanced, case-specific decision-making. In addition, clients have the right to be informed if AI is being used in their pet's care, as transparency is essential for maintaining trust and ensuring they feel comfortable with the technology's role in treatment decisions.88
Data privacy and security are critical ethical issues in medical AI.87 Unsettled questions about data ownership raise concerns about misuse, unauthorized sharing, or commercial exploitation of sensitive health information. Veterinary professionals must navigate these issues while ensuring compliance with privacy regulations and ethical standards, particularly when data are used for research or shared across platforms.
Liability is another significant challenge. When an inaccurate diagnosis is provided, it is unclear who bears responsibility: the veterinarian or the developer.87 Traditionally, veterinarians hold accountability through the Veterinary-Client-Patient Relationship, but AI introduces complexities in assigning responsibility.89 The rapid evolution of this technology may outpace the formulation of guidelines and standards, making it challenging for VPs to keep up with best practices.
The potential job displacement from AI adoption introduces important ethical and economic concerns. While AI tools are designed to assist rather than replace professionals, automating diagnostics and routine tasks could reduce the need for certain roles within clinics, a fear shared by a sizeable number of VPs.2 However, we are more likely to see a shift in roles and responsibilities rather than outright replacement. Although it is improbable that highly skilled roles, such as those of veterinarians and other VPs, would ever be fully replaced by AI, balancing AI-driven efficiency with its impact on veterinary staff requires careful consideration.
Ethical concerns in veterinary research involving AI include transparency, intellectual property, data ownership, and liability. Researchers should credit the tools they use, although concerns about “AI shaming” may complicate this practice.90 Attribution becomes complex when AI-generated insights blur the lines of intellectual contribution. Liability is another key issue: when AI produces flawed results, accountability between developers and researchers becomes unclear. Federal lawmakers and licensing bodies responsible for AI tools should collaborate with VPs to develop guidelines for the responsible use of AI, addressing issues like bias, data transparency, and proper attribution. Veterinary professionals can play a crucial role by advocating for the inclusion of AI ethics in training, ensuring the responsible deployment of these technologies in practice. Before employing a new AI tool, veterinary professionals should critically evaluate its purpose, reliability, ethical implications, and relevance to their practice to determine its suitability for use (Figure 3).
Figure 3. Key considerations for adopting AI tools in veterinary medicine. Created with Draw.io.
Future Outlook
We are living through an exciting revolution in the veterinary field. As outlined by the Gartner hype cycle, new technologies often experience a surge of excitement and overuse, followed by a period of disillusionment as limitations become apparent. Over time, these technologies find their place where they deliver real value.91 Since AI encompasses a wide range of tools, each is at a different stage in this cycle. Many AI applications in veterinary medicine have yet to reach the “Plateau of Productivity,” and it remains uncertain which will prove most useful in the long run.
As AI continues to evolve, it is important to approach its adoption with both enthusiasm and caution. While AI will undoubtedly reshape aspects of veterinary care, it cannot replace the deep, meaningful interactions between VPs, patients, and clients. Rather, AI should be viewed as a tool to enhance practice, empowering VPs rather than replacing them. For now, combining human expertise with computer intelligence seems to offer the best results.33 Those who effectively adopt AI will likely outpace those who do not.
Artificial intelligence is not a passing trend. Its integration into veterinary medicine is inevitable, but the success of these tools will depend on how they are adopted and the guidelines established to govern their use. Ultimately, AI should serve to enhance care, freeing veterinarians to focus on the human side of their profession while providing the highest standards of care.
Acknowledgments
I extend my sincere thanks to Drs. Daniel Gillis, Zvonimir Poljak, Olaf Berke, Deborah Stacey, and the late Dr. Theresa Bernardo for their guidance and support. Their expertise and insights have profoundly shaped my understanding of this topic. While this article is my own, it stands on a foundation built by their significant contributions.
Disclosures
The following AI-powered tools were used in the preparation of this manuscript: ChatGPT-4 for improving language and readability; Notion.AI for note-taking; Semantic Scholar for literature searches; Connected Papers for research exploration; Elicit for knowledge synthesis; and Microsoft Word's Editor for spelling and grammar. All ideas, interpretations, and conclusions are my own.
Funding
The author has nothing to disclose.
ORCID
K. Sobkowich https://orcid.org/0000-0001-5702-9427
References
1. Glasgow B, Reys BJ. The authority of the calculator in the minds of college students. Sch Sci Math. 1998;98(7):383–388. doi:10.1111/j.1949-8594.1998.tb17309.x
2. AI in veterinary medicine: the next paradigm shift. Digitail. 2024. Accessed September 10, 2024. https://digitail.com/blog/artificial-intelligence-in-veterinary-medicine-the-next-paradigm-shift/
3. Novozhilova E, Mays K, Paik S, Katz JE. More capable, less benevolent: trust perceptions of AI systems across societal contexts. Mach Learn Knowl Extr. 2024;6(1):342–366. doi:10.3390/make6010017
4. Google trends: artificial intelligence. Google. Accessed September 10, 2024. https://trends.google.com/trends/explore?date=today%205-y&q=%2Fm%2F0mkz&hl=en
5. Wu T, He S, Liu J, et al. A brief overview of ChatGPT: the history, status quo and potential future development. IEEE/CAA J Automat Sinica. 2023;10(5):1122–1136. doi:10.1109/JAS.2023.123618
6. Muthukrishnan N, Maleki F, Ovens K, Reinhold C, Forghani B, Forghani R. Brief history of artificial intelligence. Neuroimaging Clin N Am. 2020;30(4):393–399. doi:10.1016/j.nic.2020.07.004
7. McCarthy J, Minsky ML, Rochester N, Shannon CE. A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. Dartmouth College; 1955.
8. McCulloch WS, Pitts W. A logical calculus of the ideas immanent in nervous activity. Bull Math Biophys. 1943;5(4):115–133. doi:10.1007/BF02478259
9. Price WN. Big data and black-box medical algorithms. Sci Transl Med. 2018;10(471):eaao5333. doi:10.1126/scitranslmed.aao5333
10. Xu H, Shuttleworth KMJ. Medical artificial intelligence and the black box problem: a view based on the ethical principle of “do no harm”. Intel Med. 2024;4(1):52–57. doi:10.1016/j.imed.2023.08.001
11. Rattan P, Penrice DD, Simonetto DA. Artificial intelligence and machine learning: what you always wanted to know but were afraid to ask. Gastro Hep Adv. 2022;1(1):70–78. doi:10.1016/j.gastha.2021.11.001
12. Rajula HSR, Verlato G, Manchia M, Antonucci N, Fanos V. Comparison of conventional statistical methods with machine learning in medicine: diagnosis, drug development, and treatment. Medicina (Kaunas). 2020;56(9):455. doi:10.3390/medicina56090455
13. ChatGPT (large language model). Dec 30 version. OpenAI. 2024. Accessed December 29, 2024. https://chat.openai.com/
14. Rane S, Ku A, Baldridge JM, Tenney I, Griffiths TL, Kim B. Can generative multimodal models count to ten? OpenReview.net. 2024. Accessed September 24, 2024. https://openreview.net/pdf?id=ZdQfk4BN46
15. Goertzel B. Artificial general intelligence: concept, state of the art, and future prospects. J Artific Gen Intel. 2014;5(1):1–48. doi:10.2478/jagi-2014-0001
16. Krishna K, Khosla S, Bigham JP, Lipton ZC. Generating SOAP notes from doctor-patient conversations using modular summarization techniques. ACL-IJCNLP. May 4, 2020. Accessed September 25, 2024. https://aclanthology.org/2021.acl-long.384/
17. Chu CP. ChatGPT in veterinary medicine: a practical guidance of generative artificial intelligence in clinics, education, and research. Front Vet Sci. 2024;11:1395934. doi:10.3389/fvets.2024.1395934
18. O'Neal AL. Is Google Duplex Too Human?: Exploring User Perceptions of Opaque Conversational Agents. Thesis. The University of Texas at Austin; 2018. Accessed September 24, 2024. https://repositories.lib.utexas.edu/items/71ac66eb-be57-4d55-8ed7-2be72f79da0c
19. Philips Editorial Team. 10 real-world examples of AI in healthcare. Philips. November 24, 2022. Accessed December 29, 2024. https://www.philips.com/a-w/about/news/archive/features/2022/20221124-10-real-world-examples-of-ai-in-healthcare.html
20. Beyond downtime: redefining predictive medical equipment maintenance. GE Healthcare. October 9, 2024. Accessed December 29, 2024. https://www.gehealthcare.com/insights/article/beyond-downtime-redefining-predictive-medical-equipment-maintenance?srsltid=AfmBOoqk9sbSO6u29ad0zGm4y-izO-XNdUHamfaY–CdNXXzsPn_ZR4N&utm_source=chatgpt.com
21. Cushing M. What is telemedicine, telehealth, and teletriage. Vet Clin North Am Small Anim Pract. 2022;52(5):1069–1080. doi:10.1016/j.cvsm.2022.06.004
22. Zhang L, Guo W, Lv C, et al. Advancements in artificial intelligence technology for improving animal welfare: current applications and research progress. Anim Res One Health. 2024;2(1):93–109. doi:10.1002/aro2.44
23. Umapathy VR, Rajinikanth BS, Samuel Raj RD, et al. Perspective of artificial intelligence in disease diagnosis: a review of current and future endeavours in the medical field. Cureus. 2023;15(9):e45684. doi:10.7759/cureus.45684
24. Burti S, Banzato T, Coghlan S, Wodzinski M, Bendazzoli M, Zotti A. Artificial intelligence in veterinary diagnostic imaging: perspectives and limitations. Res Vet Sci. 2024;175:105317. doi:10.1016/j.rvsc.2024.105317
25. Reagan KL, Reagan BA, Gilor C. Machine learning algorithm as a diagnostic tool for hypoadrenocorticism in dogs. Domest Anim Endocrinol. 2020;72:106396. doi:10.1016/j.domaniend.2019.106396
26. IDEXX inVue Dx Cellular Analyzer. IDEXX Laboratories Inc. 2024. Accessed September 24, 2024. https://www.idexx.com/en/veterinary/analyzers/invue-dx-analyzer/
27. Ali T, Ahmed S, Aslam M. Artificial intelligence for antimicrobial resistance prediction: challenges and opportunities towards practical implementation. Antibiotics. 2023;12(3):523. doi:10.3390/antibiotics12030523
28. Zagajewski A, Turner P, Feehily C, et al. Deep learning and single-cell phenotyping for rapid antimicrobial susceptibility detection in Escherichia coli. Commun Biol. 2023;6(1):1164. doi:10.1038/s42003-023-05524-4
29. Pinto-Coelho L. How artificial intelligence is shaping medical imaging technology: a survey of innovations and applications. Bioengineering. 2023;10(12):1435. doi:10.3390/bioengineering10121435
30. Bouhali O, Bensmail H, Sheharyar A, David F, Johnson JP. A review of radiomics and artificial intelligence and their application in veterinary diagnostic imaging. Vet Sci. 2022;9(11):620. doi:10.3390/vetsci9110620
31. Appleby RB, Basran PS. Artificial intelligence in veterinary medicine. J Am Vet Med Assoc. 2022;260(8):819–824. doi:10.2460/javma.22.03.0093
32. Shelmerdine SC, Martin H, Shirodkar K, Shamshuddin S, Weir-McCall JR. Can artificial intelligence pass the Fellowship of the Royal College of Radiologists examination? Multi-reader diagnostic accuracy study. BMJ. 2022;379:e072826. doi:10.1136/bmj-2022-072826
33. Cacciamani GE, Sanford DI, Chu TN, et al. Is artificial intelligence replacing our radiology stars? Not yet! Eur Urol Open Sci. 2023;48:14–16. doi:10.1016/j.euros.2022.09.024
34. van Timmeren JE, Cester D, Tanadini-Lang S, Alkadhi H, Baessler B. Radiomics in medical imaging—“how-to” guide and critical reflection. Insights Imaging. 2020;11(1):91. doi:10.1186/s13244-020-00887-2
35. El Omary S, Lahrache S, El Ouazzani R. Detecting heart failure from chest x-ray images using deep learning algorithms. In: 2021 3rd IEEE Middle East and North Africa COMMunications Conference (MENACOMM). IEEE; 2021:13–18. doi:10.1109/MENACOMM50742.2021.9678291
36. Farina JM, Pereyra M, Mahmoud AK, et al. Artificial intelligence-based prediction of cardiovascular diseases from chest radiography. J Imaging. 2023;9(11):236. doi:10.3390/jimaging9110236
37. Ragab M, Al-Ghamdi ASA, Fakieh B, Choudhry H, Mansour RF, Koundal D. Prediction of diabetes through retinal images using deep neural network. Comput Intell Neurosci. 2022;2022:1–6. doi:10.1155/2022/7887908
38. Attia ZI, Friedman PA, Noseworthy PA, et al. Age and sex estimation using artificial intelligence from standard 12-lead ECGs. Circ Arrhythm Electrophysiol. 2019;12(9):e007284. doi:10.1161/CIRCEP.119.007284
39. Shin HJ, Han K, Ryu L, Kim EK. The impact of artificial intelligence on the reading times of radiologists for chest radiographs. NPJ Digit Med. 2023;6(1):82. doi:10.1038/s41746-023-00829-4
40. Zhang E, Wang F, Yin C, et al. Application of an electronic nose for the diagnosis of ketosis in dairy cows. Food Biosci. 2024;60:104355. doi:10.1016/j.fbio.2024.104355
41. Vyas S, Shukla V, Doshi N. FMD and mastitis disease detection in cows using internet of things (IOT). Proc Comput Sci. 2019;160:728–733. doi:10.1016/j.procs.2019.11.019
42. Kim SC, Kim S. Development of a dog health score using an artificial intelligence disease prediction algorithm based on multifaceted data. Animals. 2024;14(2):256. doi:10.3390/ani14020256
43. Hassan M, Abdulkarim A, Seida A. Veterinary telemedicine: a new era for animal welfare. Open Vet J. 2024;14(4):952. doi:10.5455/OVJ.2024.v14.i4.2
44. Johnson KB, Wei W, Weeraratne D, et al. Precision medicine, AI, and the future of personalized health care. Clin Transl Sci. 2021;14(1):86–93. doi:10.1111/cts.12884
45. Blanco-González A, Cabezón A, Seco-González A, et al. The role of AI in drug discovery: challenges, opportunities, and strategies. Pharmaceuticals. 2023;16(6):891. doi:10.3390/ph16060891
46. Ishwarappa AJ. A brief introduction on big data 5Vs characteristics and Hadoop technology. Procedia Comput Sci. 2015;48:319–324. doi:10.1016/j.procs.2015.04.188
47. Ur Rehman S, Shafqat F, Niaz K. Recent artificial intelligence methods and coronaviruses. In: Application of Natural Products in SARS-CoV-2. Elsevier; 2023:353–380. doi:10.1016/B978-0-323-95047-3.00009-5
48. Valentin S, Arsevska E, Rabatel J, et al. PADI-web 3.0: a new framework for extracting and disseminating fine-grained information from the news for animal disease surveillance. One Health. 2021;13:100357. doi:10.1016/j.onehlt.2021.100357
49. Kline A, Wang H, Li Y, et al. Multimodal machine learning in precision health: a scoping review. NPJ Digit Med. 2022;5(1):171. doi:10.1038/s41746-022-00712-8
50. Sadeghi M, Banakar A, Khazaee M, Soleimani M. An intelligent procedure for the detection and classification of chickens infected by Clostridium perfringens based on their vocalization. Rev Bras Cienc Avic. 2015;17(4):537–544. doi:10.1590/1516-635X1704537-544
51. Denholm SJ, Brand W, Mitchell AP, et al. Predicting bovine tuberculosis status of dairy cows from mid-infrared spectral data of milk using deep learning. J Dairy Sci. 2020;103(10):9355–9367. doi:10.3168/jds.2020-18328
52. Yoo D, Song Y, Choi D, Lim J, Lee K, Kang T. Machine learning-driven dynamic risk prediction for highly pathogenic avian influenza at poultry farms in Republic of Korea: daily risk estimation for individual premises. Transbound Emerg Dis. 2022;69(5):2667–2681. doi:10.1111/tbed.14419
53. Chen SH, Wang JC, Lin HJ, et al. A machine learning-based multiclass classification model for bee colony anomaly identification using an IoT-based audio monitoring system with an edge computing framework. Expert Syst Appl. 2024;255:124898. doi:10.1016/j.eswa.2024.124898
54. Robustillo MC, Pérez CJ, Parra MI. Predicting internal conditions of beehives using precision beekeeping. Biosyst Eng. 2022;221:19–29. doi:10.1016/j.biosystemseng.2022.06.006
55. Brownstein JS, Rader B, Astley CM, Tian H. Advances in artificial intelligence for infectious-disease surveillance. N Engl J Med. 2023;388(17):1597–1607. doi:10.1056/NEJMra2119215
56. Pandit PS, Anthony SJ, Goldstein T, et al. Predicting the potential for zoonotic transmission and host associations for novel viruses. Commun Biol. 2022;5(1):844. doi:10.1038/s42003-022-03797-9
57. Ofek E, Haj R, Molchanov Y, et al. High-confidence AI-based biomarker profiling for H&E slides to optimize pathology workflow in lung cancer. J Clin Oncol. 2023;41(16_suppl):e21207. doi:10.1200/JCO.2023.41.16_suppl.e21207
58. Kherabi Y, Thy M, Bouzid D, Antcliffe DB, Rawson TM, Peiffer-Smadja N. Machine learning to predict antimicrobial resistance: future applications in clinical practice? Infect Dis Now. 2024;54(3):104864. doi:10.1016/j.idnow.2024.104864
59. Baker LA, Momen M, Chan K, et al. Bayesian and machine learning models for genomic prediction of anterior cruciate ligament rupture in the canine model. G3 (Bethesda). 2020;10(8):2619–2628. doi:10.1534/g3.120.401244
60. Hassan AM, Rajesh A, Asaad M, et al. Artificial intelligence and machine learning in prediction of surgical complications: current state, applications, and implications. Am Surg. 2023;89(1):25–30. doi:10.1177/00031348221101488
61. Pinzolits RFJ. AI in academia: an overview of selected tools and their areas of application. MAP Educ Human. 2023;4(1):37–50. doi:10.53880/2744-2373.2023.4.37
62. deLaubell L. Scite. Charleston Advisor. 2023;24(4):60–63. doi:10.5260/chara.24.4.60
63. Whitfield S, Hofmann MA. Elicit: AI literature review research assistant. Public Services Quarter. 2023;19(3):201–207. doi:10.1080/15228959.2023.2224125
64. Elleman AM, Oslund EL. Reading comprehension research: implications for practice and policy. Policy Insights Behav Brain Sci. 2019;6(1):3–11. doi:10.1177/2372732218816339
65. Kennedy U, Paterson M, Clark N. Using a gradient boosted model for case ascertainment from free-text veterinary records. Prev Vet Med. 2023;212:105850. doi:10.1016/j.prevetmed.2023.105850
66. Szlosek D, Coyne M, Riggott J, Knight K, McCrann DJ, Kincaid D. Development and validation of a machine learning model for clinical wellness visit classification in cats and dogs. Front Vet Sci. 2024;11. doi:10.3389/fvets.2024.1348162
67. Ford E, Oswald M, Hassan L, Bozentko K, Nenadic G, Cassell J. Should free-text data in electronic medical records be shared for research? A citizens' jury study in the UK. J Med Ethics. 2020;46(6):367–377. doi:10.1136/medethics-2019-105472
68. Jumper J, Evans R, Pritzel A, et al. Highly accurate protein structure prediction with AlphaFold. Nature. 2021;596(7873):583–589. doi:10.1038/s41586-021-03819-2
69. da Silva RGL. The advancement of artificial intelligence in biomedical research and health innovation: challenges and opportunities in emerging economies. Global Health. 2024;20(1):44. doi:10.1186/s12992-024-01049-5
70. Khlaif ZN, Mousa A, Hattab MK, et al. The potential and concerns of using AI in scientific research: ChatGPT performance evaluation. JMIR Med Educ. 2023;9:e47049. doi:10.2196/47049
71. Khalifa M, Albadawy M. Using artificial intelligence in academic writing and research: an essential productivity tool. Comp Method Progr Biomed Update. 2024;5:100145. doi:10.1016/j.cmpbup.2024.100145
72. Doshi AR, Hauser OP. Generative AI enhances individual creativity but reduces the collective diversity of novel content. Sci Adv. 2024;10(28). doi:10.1126/sciadv.adn5290
74. Fardinpour A, Pedram MM, Burkle M. Intelligent learning management systems. Int J Distance Educ Tech. 2014;12(4):19–31. doi:10.4018/ijdet.2014100102
75. Burr SA, Gale T, Kisielewska J, et al. A narrative review of adaptive testing and its application to medical education. MedEdPublish. 2023;13:221. doi:10.12688/mep.19844.1
76. Guerrero DT, Asaad M, Rajesh A, Hassan A, Butler CE. Advancing surgical education: the use of artificial intelligence in surgical training. Am Surg. 2023;89(1):49–54. doi:10.1177/00031348221101503
77. Milani M. The art of veterinary practice: decision-making and quality communication. Can Vet J. 2012;53(2):199–200.
78. Wang S, Mao X, Wang F, Zuo X, Fan C. Data storage using DNA. Adv Material. 2024;36(6). doi:10.1002/adma.202307499
79. Lustgarten JL, Zehnder A, Shipman W, Gancher E, Webb TL. Veterinary informatics: forging the future between veterinary medicine, human medicine, and one health initiatives—a joint paper by the Association for Veterinary Informatics (AVI) and the CTSA One Health Alliance (COHA). JAMIA Open. 2020;3(2):306–317. doi:10.1093/jamiaopen/ooaa005
80. Gianfrancesco MA, Tamang S, Yazdany J, Schmajuk G. Potential biases in machine learning algorithms using electronic health record data. JAMA Intern Med. 2018;178(11):1544. doi:10.1001/jamainternmed.2018.3763
81. Malin B, Goodman K. Between access and privacy: challenges in sharing health data. Yearb Med Inform. 2018;27(01):055–059. doi:10.1055/s-0038-1641216
82. Pool R, Rusch E, eds. Principles and Obstacles for Sharing Data from Environmental Health Research. National Academies Press; 2016. doi:10.17226/21703
83. DeCamp M, Lindvall C. Latent bias and the implementation of artificial intelligence in medicine. J Am Med Inform Assoc. 2020;27(12):2020–2023. doi:10.1093/jamia/ocaa094
84. Basran PS, Appleby RB. What's in the box? A toolbox for safe deployment of artificial intelligence in veterinary medicine. J Am Vet Med Assoc. 2024;262(8):1090–1098. doi:10.2460/javma.24.01.0027
85. Watson J, Hutyra CA, Clancy SM, et al. Overcoming barriers to the adoption and implementation of predictive modeling and machine learning in clinical care: what can we learn from US academic medical centers? JAMIA Open. 2020;3(2):167–172. doi:10.1093/jamiaopen/ooz046
86. Sendak M, Vidal D, Trujillo S, Singh K, Liu X, Balu S. Editorial: surfacing best practices for AI software development and integration in healthcare. Front Digit Health. 2023;5:1150875. doi:10.3389/fdgth.2023.1150875
87. Cohen EB, Gordon IK. First, do no harm. Ethical and legal issues of artificial intelligence and machine learning in veterinary radiology and radiation oncology. Vet Radiol Ultrasound. 2022;63(S1):840–850. doi:10.1111/vru.13171
88. Robertson C, Woods A, Bergstrand K, Findley J, Balser C, Slepian MJ. Diverse patients' attitudes towards artificial intelligence (AI) in diagnosis. PLOS Digital Health. 2023;2(5):e0000237. doi:10.1371/journal.pdig.0000237
89. Jaremko JL, Azar M, Bromwich R, et al. Canadian Association of Radiologists white paper on ethical and legal issues related to artificial intelligence in radiology. Can Assoc Radiol J. 2019;70(2):107–118. doi:10.1016/j.carj.2019.03.001
90. Giray L. AI shaming: the silent stigma among academic writers and researchers. Ann Biomed Eng. 2024;52(9):2319–2324. doi:10.1007/s10439-024-03582-1
91. Hype cycle for artificial intelligence, 2024. Gartner Research. 2024. Accessed September 24, 2024. https://www.gartner.com/en/documents/5505695