
How to Manage AI Risks in Your Veterinary Practice


By Adam DeCarolis, PharmD, Life Science Risk Control Consulting Director, CNA Insurance

 

This article was originally printed in the September/October 2024 issue of the California Veterinarian magazine.


As new technologies and tools are developed for the practice of veterinary medicine, it is important to consider basic best practices and potential pitfalls surrounding their use. With the emerging use of artificial intelligence (AI) in all facets of healthcare, such consideration has never been more important.


The most common uses of AI in healthcare are patient management analytics and radiological image-processing software. AI can now even make or suggest treatment choices; however, this is not without risk.


Unfortunately, there are no pre-market approval processes for devices intended for animal use. This is in stark contrast to human healthcare, in which the use of AI is regulated by the U.S. Food and Drug Administration (FDA). Without the protections and guardrails provided by the FDA, how can veterinary professionals be sure they are providing adequate care and protecting patients, owners, and practitioners from injurious software? It is important to ask whether AI is being used by your practice’s vendors or consultants, as its use may be hard to detect independently.


Risk Management Tactics

In undertaking due diligence on any third-party vendors who could be using AI to provide clinical information, focus on the following considerations.

1. Understand how the AI was developed.
Determine whether veterinary industry experts were involved from concept to finished product and beyond. Their involvement helps ensure that the project is rooted in real-world practice and that the technology can produce a product that is acceptable from clinical, ethical, and industry viewpoints. Verify whether there has been a “peer review” of clinical decisions made by the AI; such review enhances the quality of the AI by identifying flaws in the model, validating accepted clinical decisions, and building trust between users and developers.

It’s also important to know whether the AI data set is open or closed. Open data sets are publicly accessible and freely licensed, but they can vary in privacy and quality. Closed data sets tend to be of higher quality and offer specialization, but access to them is restricted and the data they include is selectively curated.

Lastly, research whether the AI is a machine-learning algorithm or a generative platform. Machine learning focuses on making decisions based on existing data; an excellent example is training AI with millions of accepted radiology images to assist diagnoses, from broken bones to tumor grading. In contrast, generative AI aggregates data to produce new predictions, which can be done using images, lab results, previous diagnoses, or new data sets.
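
To make the distinction concrete, here is a minimal, hypothetical Python sketch of the machine-learning case: a classifier trained on existing labeled data that can only suggest a diagnosis. The feature names, values, and labels are illustrative assumptions, not a clinical model.

```python
# A minimal sketch of the machine-learning case described above: a classifier
# trained on existing, labeled data and used only to suggest a diagnosis.
# All feature names, labels, and values are hypothetical illustrations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features extracted from radiographs (e.g., lesion size, density).
X_train = np.array([
    [0.2, 0.9],   # small, dense   -> "fracture"
    [0.8, 0.3],   # large, diffuse -> "mass"
    [0.1, 0.8],
    [0.7, 0.4],
])
y_train = ["fracture", "mass", "fracture", "mass"]

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# The model only maps new inputs onto patterns in its existing training data;
# a practitioner still reviews and confirms the suggestion.
new_image_features = np.array([[0.25, 0.85]])
print(model.predict(new_image_features))        # e.g., ['fracture']
print(model.predict_proba(new_image_features))  # confidence to weigh, not obey
```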

 

2. Determine how medical errors are handled.
Look into whether the AI records any medical errors and whether it offers a corrective and preventive action (CAPA) plan. CAPAs should be performed to find the root cause of the error, and the error should be recorded in a database for transparency and improvement of the product.
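
As an illustration of the kind of record-keeping this implies, below is a minimal Python sketch of an error log with CAPA fields. The schema and field names are assumptions for illustration, not a specific vendor’s system.

```python
# A minimal sketch of the error-recording practice described above: each
# AI-related medical error is logged with its root cause and corrective
# action, so errors stay transparent and trendable over time.
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect("capa_log.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS ai_error_log (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        recorded_at TEXT NOT NULL,
        description TEXT NOT NULL,
        root_cause TEXT,          -- filled in once the CAPA investigation ends
        corrective_action TEXT,
        status TEXT DEFAULT 'open'
    )
""")

def log_ai_error(description: str) -> int:
    """Record a suspected AI error immediately; investigate root cause later."""
    cur = conn.execute(
        "INSERT INTO ai_error_log (recorded_at, description) VALUES (?, ?)",
        (datetime.now(timezone.utc).isoformat(), description),
    )
    conn.commit()
    return cur.lastrowid

def close_capa(error_id: int, root_cause: str, corrective_action: str) -> None:
    """Attach the CAPA findings to the original error record."""
    conn.execute(
        "UPDATE ai_error_log SET root_cause = ?, corrective_action = ?, "
        "status = 'closed' WHERE id = ?",
        (root_cause, corrective_action, error_id),
    )
    conn.commit()

err = log_ai_error("AI flagged a normal radiograph as a tumor (false positive).")
close_capa(err, "Model undertrained on this breed", "Vendor retraining; human review required")
```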


3. Note how the data is collected, protected, and used.

It is a best practice to know how the data you provide to a vendor is being used, other than providing clinical information back to your practice. Questions to ask include:
a. Is the third party selling data for research?
b. Does the data remain fully de-identified? (A sketch of such a check follows this list.)
c. Who shares in the profits generated from the sale of the data?
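
To illustrate question (b), here is a minimal Python sketch of a de-identification step a practice might verify before records leave its systems. The field names are hypothetical; match them to your actual practice-management export and vendor contract.

```python
# A minimal sketch of a de-identification check: strip owner- and
# patient-identifying fields from a record before it leaves the practice.
# The field names here are hypothetical illustrations.
IDENTIFYING_FIELDS = {"owner_name", "owner_phone", "owner_email",
                      "address", "patient_name"}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with identifying fields removed."""
    return {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}

record = {
    "owner_name": "Jane Doe",
    "owner_phone": "555-0100",
    "patient_name": "Rex",
    "species": "canine",
    "diagnosis": "cranial cruciate ligament rupture",
}
print(deidentify(record))
# {'species': 'canine', 'diagnosis': 'cranial cruciate ligament rupture'}
```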


4. Work in conjunction with AI diagnoses or treatment plans instead of relying wholly on their output.

AI should only be used to complement or supplement human expertise in clinical decision-making. Practitioners are also far superior to AI in providing ethical judgment and context.
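
As a sketch of what “complement, not replace” can look like in a record-keeping workflow, the hypothetical Python example below treats AI output as a suggestion that is retained for transparency but never recorded as the decision itself.

```python
# A minimal sketch of a human-in-the-loop workflow: AI output enters the
# record only alongside a practitioner's sign-off. Names and structure are
# illustrative assumptions, not a specific product's workflow.
from dataclasses import dataclass

@dataclass
class AISuggestion:
    diagnosis: str
    confidence: float  # 0.0-1.0, as reported by the hypothetical AI tool

def finalize_diagnosis(suggestion: AISuggestion,
                       practitioner_diagnosis: str,
                       practitioner: str) -> dict:
    """The practitioner's judgment is authoritative; the AI output is kept
    for context and later review, never recorded as the decision itself."""
    return {
        "final_diagnosis": practitioner_diagnosis,  # human decision controls
        "ai_suggestion": suggestion.diagnosis,      # kept for transparency
        "ai_confidence": suggestion.confidence,
        "signed_off_by": practitioner,
    }

entry = finalize_diagnosis(
    AISuggestion(diagnosis="osteosarcoma", confidence=0.72),
    practitioner_diagnosis="biopsy before treatment; imaging inconclusive",
    practitioner="Dr. Alvarez",
)
print(entry)
```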

Medical technology can transform and improve care in ways that were unimaginable in the past. Used responsibly and ethically, AI can help practitioners stay current on treatment protocols, improve treatment outcomes, and, most importantly, “do no harm.”

The information, examples and suggestions presented in this material have been developed from sources believed to be reliable, but they should not be construed as legal or other professional advice. CNA accepts no responsibility for the accuracy or completeness of this material and recommends consultation with competent legal counsel and/or other professional advisors before applying this material in any particular factual situation. This material is for illustrative purposes and is not intended to constitute a contract. Please remember that only the relevant insurance policy can provide the actual terms, coverages, amounts, conditions and exclusions for an insured. All products and services may not be available in all states and may be subject to change without notice. “CNA” is a registered trademark of CNA Financial Corporation. Certain CNA Financial Corporation subsidiaries use the “CNA” trademark in connection with insurance underwriting and claims activities. Copyright © 2024 CNA. All rights reserved.

 
