As Federal Oversight Retreats, Health AI Giants Write Their Own Rules

By AI Healthcare News Team

Dozens of tech companies and healthcare systems are taking the lead in creating guidelines for artificial intelligence in medicine while federal regulators struggle to keep pace. Groups like the Coalition for Health AI, which includes major players such as Mayo Clinic and Microsoft, are stepping in to fill the regulatory gap through self-regulation.

With regulators lagging, tech and healthcare giants forge their own path in medical AI governance.

The lack of thorough federal oversight has led to a patchwork of state regulations. Colorado recently passed a detailed AI law that places limits on healthcare AI developers. Utah has created rules for mental health chatbots, while California is working on legislation to restrict AI use in insurance decisions.

More than 50 industry consortia have formed to address AI governance in healthcare. These include the Health AI Partnership and VALID AI, which aim to establish common standards for evaluating AI tools. They're working to ensure these technologies don't produce biased results and that they comply with existing healthcare regulations. The typical implementation timeline for healthcare AI systems spans 12 to 24 months, allowing time for thorough testing and integration.

The U.S. Department of Health and Human Services (HHS) has provided some guidance on AI use, emphasizing fairness in patient care decisions. The Office for Civil Rights is focusing on ensuring that AI tools comply with nondiscrimination laws. Meanwhile, the FDA has promised future guardrails for AI applications in drug development. HHS is currently developing a comprehensive AI strategy, due to be made public by January, that will include assurance labs as a key component.

One major challenge is the unclear definition of AI, which complicates regulatory efforts. Privacy concerns also grow as AI use in healthcare expands. State-by-state regulation may lead to inconsistent policies across the country, prompting some states to work together to harmonize their approaches.

Industry-led initiatives are creating accreditation systems for the quality assurance labs that vet AI tools. To earn certification, these labs must demonstrate domain expertise and compliance with international standards.

As AI in healthcare continues to evolve rapidly, both industry leaders and regulators face the challenge of keeping policies current and effective. While self-regulation promotes innovation, questions remain about whether industry-led efforts alone can adequately protect patient safety and privacy in this fast-changing landscape.