Entrepreneurs across the country desperate to bring the power of AI into health care are urging Washington to consider the risk of blocking innovation, bringing into focus a chasm between the startup world’s race toward deployment and regulators’ attempts to protect patients from harmful, biased algorithms.
Over the next two months, as the Department of Health and Human Services stands up its own AI task force, a mandate under the White House's recent 111-page order on the use and regulation of artificial intelligence, entrepreneurs tell STAT they're anxious for clarity on potential new demands for auditing and disclosing internal tests and data sets, some of which they fear could impose unnecessary documentation burdens. Once it's formed, the task force has one year to issue a strategic plan.
There’s a risk, for instance, that HHS regulations will focus too narrowly on evaluating models’ training data sets as a way to root out algorithmic bias, said Neal Khosla, founder of Curai, a health-focused machine learning company that offers virtual care services.