The use of artificial intelligence (AI) is increasing in the field of veterinary medicine, but veterinary experts warn that the rush to adopt the technology raises some ethical considerations.
“A major difference between veterinary medicine and human medicine is that veterinarians have the ability to euthanize patients – which may be for a variety of medical and financial reasons – so the stakes of the diagnoses provided by AI algorithms are very high,” says Eli Cohen, associate clinical professor of radiology at NC State’s College of Veterinary Medicine. “Human AI products must be validated before being marketed, but there is currently no regulatory oversight for veterinary AI products.”
In a review for Veterinary Radiology & Ultrasound, Cohen discusses the ethical and legal issues raised by veterinary AI products currently in use. The review also highlights key differences between veterinary AI and the AI used in human medicine.
AI is currently marketed to veterinarians for radiology and imaging, largely because there are not enough veterinary radiologists in practice to meet the demand. However, Cohen points out that AI image analysis is not the same as a trained radiologist interpreting images in light of an animal’s medical history and unique situation.
Although AI can accurately identify certain conditions on an X-ray, users should understand its potential limitations. For example, the AI may not be able to identify all possible conditions, and it may not reliably distinguish between conditions that appear similar on X-rays but require different courses of treatment.
Currently, the FDA does not regulate AI in veterinary products as it does in human medicine. Veterinary products can reach the market without any oversight beyond that provided by the AI developer and/or company.
“AI is often a black box, meaning even the developer doesn’t know how it arrives at decisions or diagnoses,” Cohen says. “Add to this the companies’ lack of transparency about how the AI was developed, including how it was trained and validated, and you’re asking vets to use a diagnostic tool with no way to assess whether it is accurate or not.
“Since vets often get a single visit to diagnose and treat a patient, and patients are not always seen for follow-up, AI could provide misdiagnoses or incomplete diagnoses that a vet would have limited ability to catch unless the case is investigated or a serious outcome occurs,” Cohen continues.
“AI is being marketed as a replacement or as having similar value to a radiologist’s interpretation because there is a gap in the market. The best use of AI in the future, and certainly in this initial phase of deployment, is with what’s called a radiologist in the loop, where the AI is used in conjunction with a radiologist, not in place of one,” says Cohen.
“This is the most ethical and defensible way to use this emerging technology: to leverage it to allow more veterinarians and companion animals to access radiologist consultations, but above all to allow experts in the field to troubleshoot the AI and prevent adverse effects and harm to patients.”
Cohen recommends that veterinary experts partner with AI developers to ensure the quality of the datasets used to train the algorithm, and that third-party validation tests be performed before AI tools are released to the public.
“Almost anything a veterinarian might diagnose on X-rays has the potential to be medium to high risk, meaning it may result in changes to medical treatment, surgery, or euthanasia, whether due to the clinical diagnosis or the financial constraints of the client,” says Cohen. “This level of risk is the threshold the FDA uses in human medicine to determine whether there should be a radiologist in the loop. We would be wise as a profession to adopt a similar model.
“AI is a powerful tool and will change the way medicine is practiced, but the best practice going forward will be to use it in concert with radiologists to improve access to and quality of care for patients, instead of using it to replace those consultations.”
Eli B. Cohen et al., First, Do No Harm. Ethical and Legal Issues of Artificial Intelligence and Machine Learning in Veterinary Radiology and Radiation Oncology, Veterinary Radiology & Ultrasound (2022). DOI: 10.1111/vru.13171
North Carolina State University
Citation: Artificial intelligence in veterinary medicine raises ethical challenges (December 14, 2022). Retrieved December 15, 2022 from https://phys.org/news/2022-12-artificial-intelligence-veterinary-medicine-ethical.html