How Do You Explain Artificial Intelligence?


As part of its commitment to supporting innovation as well as safety and effectiveness, the FDA has been giving artificial intelligence (AI) a lot of attention over the past few years. One topic of discussion centers on transparency and explainability: it’s important to have it, but how do you do it? And how do you do it for a wide variety of stakeholders?

A panel of legal and technical experts gathered last week for a virtual briefing on this topic as it applies to regulation and liability. Hosted by Epstein Becker Green, the briefing covered issues related to employment law as well as healthcare and life sciences.

What’s the Problem?

Many AI-based systems have a “black box” problem. Black box AI means the tool analyzes data, recognizes patterns from that data, and reaches conclusions based on those findings, but how it happens is a mystery.

In the context of AI/machine learning-enabled medical devices, the physician doesn't know how the tool works and can't explain to the patient how it reached its conclusion. In many cases, even the programmers who built the algorithm don't know how it works. This is a problem: if physicians and patients already distrust AI/ML-based tools, they're surely not going to trust a black box AI/ML-based tool. Without trust, the device likely won't get used. And if it makes an inaccurate decision that leads to patient harm, that's an even bigger problem.
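To make the contrast concrete, here is a minimal sketch of what "explainable" can mean in practice. The features, weights, and patient values are entirely hypothetical, not drawn from any real device: with a transparent (linear) model, a clinician can decompose a risk score into per-feature contributions, whereas a black box yields only the final number.

```python
# Hypothetical transparent risk model: a linear score whose output
# can be broken down into per-feature contributions.
# All feature names, weights, and values are illustrative only.
weights = {"age": 0.04, "systolic_bp": 0.02, "hba1c": 0.30}
bias = -6.0

def explain(patient):
    """Return the risk score and each feature's contribution to it."""
    contributions = {f: weights[f] * patient[f] for f in weights}
    score = bias + sum(contributions.values())
    return score, contributions

patient = {"age": 65, "systolic_bp": 140, "hba1c": 8.5}
score, parts = explain(patient)
# Each term in `parts` can be shown to the patient, e.g.
# "hba1c contributed 0.30 * 8.5 = 2.55 to your score."
```

A deep neural network trained on the same data might be more accurate, but it offers no comparable per-feature breakdown without additional explainability tooling, which is the crux of the black box problem.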

The Advantages of Transparent, Explainable AI

Transparency is the preferred alternative to black box AI. Because AI/ML-enabled medical device software learns over time, developers and regulators need to understand what information providers and patients require to make informed choices. Providers need to know how the information will help patients. Patients want to know how the software will benefit their health. And physicians have to understand the tool well enough to explain it to patients, so patients can decide whether it's right for them.

Transparency also relates to informed consent. Patients must know when they are the subject of AI/ML-enabled care and what their rights are.

Transparency has other advantages:

• Lowers the risk of bias – the developer "has to be able to explain how they approached the problem, why a certain technology was used, and what data sets were used," said AI expert Evert Haasdijk, senior manager, Forensic at Deloitte, for one of the company's insights reports. "Others have to be able to audit or replicate the process if needed."

• Improves accuracy – mitigating bias leads to more accurate outputs, which is especially important in medical device software.

• Fits with the concept of fundamental fairness – stakeholders can deem a decision fair if it’s transparent and explainable.

Regulatory Considerations for AI/ML-Enabled Medical Devices

During the life sciences breakout session, Michael Zagorski, a strategic consultant for EBG Advisors, and Nathaniel Glasser, a member of Epstein Becker Green, discussed the requirements for medical device approval or clearance in the U.S. A point of note: when preparing a submission, think about, and develop a plan for, how you’ll manage software modifications. Especially for devices that get “smarter” over time, you’ll need a predetermined change control plan. Unregulated devices may benefit from this type of plan, as well.

Further reading on the FDA’s thoughts on AI/ML:

Virtual Public Workshop on AI/ML Transparency

Good Machine Learning Practice for Medical Device Development

Artificial Intelligence Action Plan

National Institute of Standards and Technology

Does your medical device/digital health company want to communicate the nuances of AI/ML in a way that resonates with your audience? Partner with a medical device content writer with the trifecta you need: superior industry knowledge, expert writing chops, and content marketing expertise. Get in touch to find out how we can work together.
