Investors are starting to back startups that offer privacy and security services to bolster health AI products already on the market while they wait for crucial safety and privacy regulations to take shape.

Though health leaders are racing to deploy generative AI products that can automatically transcribe doctor-patient conversations during medical appointments or churn through massive repositories of scientific research, they’re still flummoxed about how to measure their quality. Cybersecurity experts have warned that indiscriminately hooking third-party apps up to health system networks could expose sensitive data to hackers as ransomware attacks surge. Meanwhile, regulators and industry groups are rushing to set standards for responsible AI use.


Several government agencies and industry groups are working on rules and recommendations for avoiding bias and safety hazards in medical AI, but hospitals and startups are still in the dark about what exactly those will require and who’ll eventually be liable for any harm caused. It could be months, or even years, before these overlapping rules from multiple federal agencies are in place, and they might still change as the technology evolves.
