Pro News

Startups oppose tech giants and health systems’ plan to lead on AI regulation

BY: RUTH READER | 03/11/2024 06:00 AM EDT

A backlash is building to a plan put forth by a coalition of tech giants and major health systems to establish a private sector-led system for evaluating artificial intelligence tools in health care.

The Coalition for Health AI, which counts Google, Microsoft, Johns Hopkins University and the Mayo Clinic among its members, announced last week that it will help establish assurance labs by the end of the summer to evaluate AI products.

The initiative has the endorsement of top U.S. officials, who have signed on as observers. FDA Commissioner Robert Califf spoke to the group at its event last week and said he supported its efforts.

But startups and their investors, who weren’t included in the planning, say they’re concerned these assurance labs are unproven and that they have the potential to unfairly benefit the entities running them.

Dr. John Halamka, who chairs CHAI’s board and is president of the Mayo Clinic Platform, said he expects major universities to host the assurance labs.

That worries smaller companies and the firms that invest in them, since the universities are developing their own AI or collaborating with tech giants to develop products.

“Under CHAI’s proposal, several organizations that have been tasked with review authority actually operate their own AI incubator programs,” said Julie Yoo, general partner at venture capital firm Andreessen Horowitz. “Ultimately, the technologies developed in those incubators could be in direct competition with the ones they are tasked to review and validate.”

Why it matters: Advanced artificial intelligence is already making its way into health systems without much federal oversight.

Only HHS’ Office of the National Coordinator for Health Information Technology has issued rules aimed at boosting transparency around what goes into building the latest tools, which can learn over time and require regular monitoring.

The Food and Drug Administration so far has taken a circumspect approach to regulating AI, offering guidance but not rules. That’s in part because the agency doesn’t have enough staff to adequately police the technology, according to Califf, who sees a private sector-led system as a way around that problem.

But private sector-led regulation is hard to get right, say the coalition’s critics.

Brett Meeks, executive director of the Health Innovation Alliance, which advocates for people who work in and rely on health information technology, said that it can be hard to force a market to adopt new standards.

“It’s so expensive, not just to acquire the technology, but to maintain responsible AI practices that everyone’s trying to develop internally,” he said.

And a high barrier to entry makes it difficult for smaller companies to compete, said Punit Soni, founder and CEO of Suki AI, which makes artificial intelligence tools that aim to reduce doctors’ administrative burdens.

He said it’s telling that CHAI’s partners are all large academic and tech players.

“Working only with tech giants also increases the risk of regulatory capture by these large companies, which will also hinder innovation down the line,” he said.

Shamim Nemati, an associate professor of medicine at the University of California San Diego, said he would prefer AI be validated through more conventional means, such as government-regulated multi-site clinical trials.

His sentiments echo a recent paper calling for AI regulation to focus on patient-centered outcomes, published in the Journal of the American Medical Association.

Nemati said AI needs to be tested in a variety of settings, not just in large academic centers. It also needs to be fine-tuned everywhere it’s implemented.

CHAI responds: Brian Anderson, CHAI’s CEO and chief digital health physician at MITRE, a nonprofit that advises government agencies on technology, told POLITICO that if smaller firms and startups want a say in setting the standards for AI, they should join the coalition.

Some 1,800 entities have already signed up, over 600 of which are health systems, he said.

“CHAI will be a failure if all it is is academic institutions and big tech leading the effort,” he said.

Halamka said that CHAI recognizes that firms in the AI sector differ and that membership dues will be on a sliding scale to ensure equitable access to its assurance labs.

Anderson also argues that the assurance labs, which will test models on retrospective data, will be more affordable than running multi-site clinical trials. And he said small rural health systems can become assurance labs too. So far, 30 health systems are preparing to stand up labs.

Even if they don’t, he said, CHAI is planning to provide toolkits to help members fine-tune AI to their specific populations and monitor it.
