The Future of AI-Enabled Health 2025


From leaders with good intentions to leaders who make responsible technical decisions

Health leaders, both public and private, often defer technical decisions to experts, but CEOs and clinicians should upskill and engage with technical matters, bringing a healthy scepticism to the debate. By challenging technical ideas, health leaders can ensure alignment with the ambition they pursue. For example, interoperability is almost never a requirement in public electronic health records, yet it is a concern frequently raised by policy-makers. Additionally, as health workers are critical to the adoption of AI and its success in scaling, capacity-building is crucial – for example, by including AI in medical curricula to build capacity from an early stage. In the future, understanding AI’s potential, limits and risks will be a core skill for CEOs and other health leaders, not just CTOs, enabling them to make strategic decisions that serve their broader vision.

From waiting for guidelines to proactively building trust

Doubts and distrust are slowing the scaling of AI in health, and while regulation is often seen as a solution, leaders should not rely on it as a silver bullet, especially if it leads to over-regulation and excessive constraints. Premature regulation could stifle innovation, and there is broad consensus that effective regulation will lag behind technological advances. As the sector enters the AI-for-health era, leaders cannot assume that existing regulations will fully protect patients, including in relation to privacy, cybersecurity and ethical concerns. Instead, they should adopt phased and flexible approaches that are proportionate to the associated risks, and proactively engage their organizations in bolstering post-market surveillance to detect early signals of AI-related risks as soon as possible and with full transparency.
In addition, organizations should consider establishing AI ethics committees and principles, similar to bioethics in healthcare, so that ethical decisions are made on the best available information and stand the test of time. Leaders can begin to build trust even before regulations are in place by steering their organization in a way that ensures existing guidelines and standards evolve and remain fit for purpose.

From dispersed data to deliberate integration

Access to data remains a significant concern, reducing both trust and AI performance. Datasets can be biased, and not all data is accessible, fuelling the perception that some players are hindering others from innovating, which can stifle the overall growth and potential of AI. To overcome these challenges and ensure equitable access to quality data as a common foundation for AI infrastructure (see point 3), leaders must advocate for globally connected but locally controlled datasets, including broader medical data such as dental information and socioeconomic data. This approach will not only preserve local ownership and data protection but also promote collaboration, ensuring that innovation can thrive on a global scale while addressing the specific needs and concerns of individual regions. Such an approach would also ensure rapid success in achieving point 1, with common data exchange models and basic architecture.4