The path to responsible AI
At Kaiser Permanente, AI tools must advance our core mission of delivering high-quality, affordable care to our members. This means AI technologies must demonstrate a “return on health,” such as improved patient outcomes and experiences.
We evaluate AI tools for safety, effectiveness, accuracy, and equity. Kaiser Permanente is fortunate to have one of the most comprehensive datasets in the nation, thanks to our diverse membership base and powerful electronic health record system. We can use this anonymized data to develop and test our AI tools before we ever deploy them for our patients, care providers, and communities.
We are careful to ensure that the AI tools we use support the delivery of equitable, evidence-based care for our members and communities. We do this by testing and validating the accuracy of AI tools across our diverse populations. We are also working to develop and deploy AI tools that can help us identify and proactively address the health and social needs of our members. This can lead to more equitable health outcomes.
Finally, once a new AI tool is implemented, we continuously monitor its results to ensure it is working as intended. We stay vigilant; AI technology is rapidly advancing, and its applications are constantly evolving.
Policymakers can help set guardrails
While Kaiser Permanente and other leading health care organizations work to advance responsible AI, policymakers have a role to play too. We encourage action in the following areas:
- National AI oversight framework: An oversight framework should provide an overarching structure for guidelines, standards, and tools. It should be flexible and adaptable to keep pace with rapidly evolving technology; new breakthroughs in AI are happening monthly.
- Standards governing AI in health care: Policymakers should work with health care leaders to develop national, industry-specific standards to govern the use, development, and ethics of AI in health care. By working closely with health care leaders, policymakers can establish standards that are effective, useful, timely, and not overly prescriptive. This matters because standards that are too rigid can stifle innovation, limiting the ability of patients and providers to experience the many benefits AI tools could help deliver.
Guardrails: Progress so far
The National Academy of Medicine has convened a steering committee to establish a Health Care AI Code of Conduct that draws on health care and technology experts, including Kaiser Permanente. This is a promising start toward creating an oversight framework.
In addition, Kaiser Permanente appreciates the opportunity to be an inaugural member of the U.S. AI Safety Institute Consortium. The consortium is a multisector working group setting safety standards for the development and use of AI, with a commitment to protecting innovation.
Considerations for policymakers
As policymakers develop AI standards, we urge them to keep a few important points top of mind.
- Lack of coordination creates confusion. Government bodies should coordinate at the federal and state levels to ensure AI standards are consistent, not duplicative or conflicting.
- Standards must be adaptable. As health care organizations continue to explore new ways to improve patient care, it is important for them to work with regulators and policymakers to ensure standards can be adapted by organizations of all sizes and levels of sophistication and infrastructure. This will allow all patients to benefit from AI technologies while also being protected from potential harm.
AI has enormous potential to help make our nation’s health care system more robust, accessible, efficient, and equitable. At Kaiser Permanente, we are excited about AI’s future and eager to work with policymakers and other health care leaders to ensure all patients can benefit.