
FDA grapples with its role in AI, and how to guide industry's adoption

As drug companies embrace artificial intelligence as a part of their research and development operations, regulators at the FDA are working to decide how and when they will play a role in overseeing the technology.

On Tuesday, the agency gathered industry executives, regulatory experts and academics to share viewpoints on using AI in drug development. They discussed everything from data quality issues to the transparency and explainability of AI models to the biggest gaps in bringing the technology into real-world healthcare.

FDA leaders didn’t conclude with any sweeping, dramatic plans for what’s next, but they did express an openness, and even a desire, to see more ideas for using AI. The agency is expected to publish guidance on regulating AI later this year.

The hope is that the FDA can strike the right balance between sweeping principles and practical advice for specific uses. Multiple speakers brought up the perpetual challenge of regulatory agencies keeping pace with the fast-moving AI space. Some of the FDA’s top regulators said Tuesday that they hoped to bring more direction to drug developers interested in using AI.

For instance, in using AI to monitor a drug’s safety, agency drugs chief Patrizia Cavazzoni acknowledged the industry has lacked the regulatory clarity to try out ideas. She added that the agency is “very interested” in companies coming to the FDA with pilot programs or potential use cases.

“Ultimately, we need to be able to provide a greater level of predictability and certainty,” Cavazzoni said.

Panelists also outlined a bigger decision facing the FDA: setting the right role for the regulator. Dina Katabi, an MIT computer science professor, called how the government approached the internet an “amazing” example to consider. In that case, the government brought two things to the table, she said: a vision and the resources.

“Both of them are missing” for AI, Katabi said. “There is no specific, cohesive vision on, ‘This is what we’re going to do and here are the resources we are going to put behind it.'”

Other speakers argued that the FDA would be most helpful in focusing on smaller, pragmatic steps. Ryan Hoshi, AbbVie’s director of regulatory policy and intelligence and a former FDA policy analyst, suggested the agency start with building a consensus on guiding principles, terms and definitions.

“That’s the low-hanging fruit we could easily achieve,” Hoshi said.

Katabi, meanwhile, countered that focusing on the small problems can generate lots of regulations and rules at the risk of neglecting the biggest questions.

“We’ve been collecting the small, low-hanging fruit for over a decade now,” Katabi said, adding, “I think to move faster, we need more effort.”

Meanwhile, Charles Fisher, the CEO of Unlearn.AI, a startup trying to use AI to speed up clinical trials, said regulatory bodies may not even be the biggest hurdle to adoption. He called his company’s interactions with US and European regulators “great,” saying the company hasn’t run into a risk-averse attitude or had trouble translating between the different languages of the biopharma, computer science and regulatory worlds.

“We encountered both of those problems much more in the industry with the sponsors than we do with the regulators,” Fisher said.

