WMA Statement on Artificial and Augmented Intelligence in Medical Care


Adopted by the 76th WMA General Assembly, Porto, Portugal, October 2025

 

PREAMBLE

  1. The World Medical Association (WMA) recognizes that artificial intelligence (AI) is rapidly transforming all sectors, including healthcare. In this statement, the WMA reaffirms its commitment to patient-centered, physician-led care by emphasizing the concept of augmented intelligence – a framing that highlights AI’s role in strengthening, rather than supplanting, human judgment – while recognizing that in specific, well-defined tasks AI may perform independently, always under human accountability. Understood as augmentation, AI supports rather than replaces human judgment, empathy, and accountability.
  2. Building on lessons learned from early deployments, the WMA sets out principles that maximize AI’s benefits while mitigating its risks, ensuring that its development, regulation and use remain consistent with medical ethics, international human-rights standards and the public’s trust in the profession.

DEFINITIONS AND SCOPE

  1. To promote clarity across jurisdictions while embedding the augmented intelligence perspective, the WMA uses the following working definitions in the healthcare ecosystem:
  • Artificial Intelligence (AI): Computer systems designed to perform tasks that normally require human intelligence – such as learning, problem-solving, understanding language, and recognizing patterns.
  • Augmented Intelligence: Use of artificial intelligence designed to support—not replace—human capabilities in healthcare.
  • Physician-in-the-Loop (PITL): An extension of the general “human-in-the-loop” principle whereby a licensed physician—rather than any user—must review and retain final authority over all AI outputs before they shape clinical care. Where clinical care involves multidisciplinary teams, PITL implementation should ensure that all relevant licensed professionals are adequately consulted, while the physician retains ultimate clinical responsibility.
  2. Emphasis on “augmented”
  • The term signals a human-centered approach to AI—one that reinforces the physician’s role as the final decision-maker. Rather than viewing AI as a replacement, augmented intelligence frames these tools as extensions of clinical expertise, designed to support – not replace – professional judgment, empathy, and responsibility.
  • While “AI” is widely understood as artificial intelligence, emphasizing the augmented perspective helps ensure that systems are designed, validated, regulated, and trusted with the right ethical priorities.
  • For the medical profession, this framing also enables more effective advocacy—especially when engaging with policymakers, regulators, and stakeholders who default to the broader term AI. It equips physicians to promote technologies that truly align with the goals of ethical, patient-centered care.
  3. Scope and audience
  • This statement applies to all uses of AI in medicine, including clinical care and research, where AI primarily augments human decision-making. AI systems in administrative and educational contexts should likewise be used responsibly and with appropriate human oversight.
  • Its principles address physicians, other healthcare professionals, healthcare organizations, developers, regulators, payers, academic institutions, and industry partners, each of whom shares responsibility for ensuring that AI remains a safe, equitable, transparent, and ethically-governed tool in the delivery of healthcare worldwide.

GUIDING PRINCIPLES FOR AI IN HEALTHCARE

  1. Human-centricity: Human-centricity in AI prioritizes human needs, values, and wellbeing above technological capabilities or performance metrics. This principle includes:
  • Maintaining and respecting patient dignity, autonomy, and rights through meaningful consent for AI use.
  • Preserving patient health and well-being, and the human connection as the paramount considerations.
  • Embedding cultural competence to ensure AI systems respect diverse patient values, clinical needs, languages, and health beliefs.
  2. Physician well-being: The well-being of physicians and other clinicians must be safeguarded, recognizing that reducing administrative burden and avoiding unnecessary cognitive load are essential not only for supporting healthcare professionals but also for ensuring the quality and safety of patient care.
  3. AI is a Tool: AI should serve as a means to support healthcare goals rather than an end in itself. Unlike traditional medical tools, AI systems may appear to learn and adapt without continuous human input, making it essential to pair their use with strong human oversight and ethical governance.
  4. Accountability: AI integration does not diminish physician responsibility for patient welfare and advocacy. Consistent with the PITL principle, physicians must continue exercising professional judgment, and the final responsibility and accountability for diagnosis, indication, and therapy must always lie with the physician. At the same time, the growing prevalence of these tools necessitates clearly distributed accountability. Responsibility should be appropriately allocated among all stakeholders, including but not limited to developers, healthcare organizations, regulators, researchers and clinicians.
  5. Transparency, Explainability, and Trustworthiness:
  • AI systems must be designed and developed in ways that ensure their outputs and recommendations can be meaningfully understood by their intended end users—whether physicians, other healthcare professionals, or patients—within the relevant clinical context. Transparency requires openness about how systems are developed, trained, and intended to be used – extending beyond the “black box” paradigm – while explainability provides insight into the basis for specific outputs, thereby fostering trust and enabling responsible use. Transparency requirements and disclosures must be tailored to the needs of physicians and patients without adding paperwork or extra administrative tasks. Ensuring these qualities is a shared responsibility across all stakeholders, including developers, healthcare organizations, regulators, researchers, and clinicians.
  • Mechanisms should exist for meaningful challenge of healthcare AI outputs, enabling patients and clinicians – including physicians – to question, review, or override AI recommendations when appropriate. This capacity is essential for building clinical trust, without which clinicians may reject valuable AI tools or rely excessively on opaque systems.
  • Explainability exists on a spectrum, with some complex models functioning as “black boxes” where only input/output relationships can be observed. The level of explainability required should generally be proportional to the clinical risk involved and the degree of autonomy granted to the system. In high-stakes contexts such as life-and-death decision-making, additional safeguards and oversight must be in place whenever full explainability cannot be achieved.
  6. Safe deployment: Safe deployment of AI in healthcare requires real-world validation demonstrating consistent performance, clinical efficacy, and usability before widespread adoption. Before clinical deployment, AI systems must also undergo rigorous ethical and health equity impact assessments that are context-sensitive and adapted to the specific healthcare setting and population, with particular attention to vulnerable and underrepresented groups. Implementation must include continuous performance monitoring, feedback mechanisms, and iterative improvement protocols to ensure sustained benefit and global accessibility. Risks and harmful consequences, including bias, must be properly understood, anticipated, and mitigated.
  7. Equitable implementation: New and beneficial AI healthcare tools must be developed and deployed equitably, with the goal of being accessible worldwide. Equitable implementation should ultimately bridge gaps in healthcare access, treatment, and outcomes, while expanding access to technology across disparate healthcare facilities.
  8. Data governance: All stakeholders must maintain the highest standards of data collection, storage, processing, and sharing to protect patient privacy and institutional trust. This principle is foundational because healthcare AI depends on data access. Transparency around data provenance – including the origin, diversity, and quality of datasets used to train AI systems – must also be ensured to build trust and verify that data appropriately represents the patients being served.
  9. Environmental impact: Effective implementation of AI in healthcare requires careful consideration of its environmental impact and a strong commitment to sustainability. Environmental responsibility must be integrated alongside clinical validation to ensure that new technologies improve care while minimizing harm to the planet.

PHYSICIAN ROLES AND RESPONSIBILITIES

  1. Clinical Judgment and Accountability: As emphasized in the PITL principle, physician judgment remains essential when using AI in healthcare, serving as both an ethical imperative and a practical necessity. Physicians must maintain professional autonomy and clinical independence to act in the best interests of patients, consistent with the WMA Declaration of Seoul.
  2. Patient Advocacy: Physicians must safeguard patient health, well-being, and safety, ensuring that AI tools are only used in ways that genuinely benefit patients. Patient safety must remain a fundamental priority, whether or not augmented intelligence is applied.
  3. AI tool development: Physicians should be involved throughout the development and implementation of AI technologies in healthcare. They must participate in decision-making processes about technology and its use from the outset and be empowered to scrutinize new innovations, including for usability.
  4. Maintenance of competencies: Physicians must maintain core clinical expertise while also being educated and trained to work responsibly with AI systems. Delegation of tasks to AI must not erode the human capability required for the safe delivery of safety-critical care or for continuity when AI systems are unavailable or unreliable. Healthcare organizations should support this through ongoing education, simulation-based refreshers, periodic skills maintenance, and documented failover procedures that enable clinicians to critically appraise, override, and – when necessary – perform essential tasks independently.
  5. Incident reporting: Physicians must be empowered to report incidents and question outcomes resulting from the use of AI in healthcare.

PATIENT RIGHTS AND ENGAGEMENT

  1. While core patient rights are covered in existing WMA policies, AI introduces new risks – especially due to its reliance on data – that require focused ethical attention.
  2. Informed consent: Given AI systems’ reliance on patient health information, appropriate safeguards for data use are crucial. The principles of informed consent and transparency, building upon the WMA Declaration of Lisbon’s affirmation of patients’ rights to information and self-determination, must be rigorously applied in healthcare involving AI. Where possible, patients should be informed about the role AI plays in their care in ways that are understandable and meaningful, while physicians retain responsibility for ensuring safe and appropriate use of AI. In circumstances where full technical comprehension is impractical, informed consent may reasonably extend to a ‘consent for governance’ model, whereby patients place justified trust in physicians, healthcare institutions, and regulatory oversight to uphold their rights, safety, and welfare.
  3. Data rights: Patients must be informed about AI systems’ limitations and potential for error, as well as how physician oversight helps to ensure their protection. Patients should retain the right to request removal of their data from AI systems where feasible and legally permissible, and the right to understand how their data contributes to their care.
  4. Patient autonomy and explanation rights: Patient autonomy must be preserved through meaningful consent processes. Patients should retain the right, where feasible, to refuse AI-mediated interventions and request human-only assessment. Where such refusal is not possible due to systemic integration of AI, safeguards must ensure that patients’ data remain anonymous and non-traceable. Patients must have access to understandable and non-biased explanations of how AI contributes to their care, tailored to their information needs and preferences. They must also retain the right to dispute AI-generated recommendations they believe to be erroneous and to seek appropriate redress. This must extend to health insurer use of AI to determine patient care, payment, and coverage.
  5. Vulnerable patient populations: Vulnerable patient cohorts, such as those with reduced decision-making capabilities, must not be disadvantaged or harmed through the use of AI in healthcare. Safeguards must include proactive bias mitigation, inclusive dataset development, and tailored consent or governance procedures to protect those unable to fully exercise autonomy. Particular attention must be given to ensuring that informed consent and data rights principles are applied in ways that do not reinforce structural inequities or exclude vulnerable groups from fair access to care.

GOVERNANCE, REGULATION, AND LIABILITY

  1. Up-to-date standards: Regulation, standards, and guidance must be suitably robust to safeguard patient safety and to ensure that the ethical rules of the medical profession are considered, with regulators empowered to stay up to date with developments and enforce legislation. Healthcare AI policies should be coordinated and consistent across government entities.
  2. Liability: Clear lines of legal liability must be established, encompassing AI developers as well as physicians and healthcare organizations. Accountability should be shared and proportional, reflecting each actor’s role in design, deployment, and use, rather than defaulting to a single actor alone.
  3. Continuous audit: There should be regular reviews and audits of regulatory processes and bodies surrounding AI in healthcare, including bias audits, ethical reviews, and participatory governance with physician input.

CLINICAL INTEGRATION AND IMPLEMENTATION OF HEALTH AI

  1. Tool evaluation and governance support: AI systems implemented in clinical settings must be validated for clinical relevance, safety, and effectiveness. Regular updates must be implemented to maintain security and ensure systems remain compatible with evolving clinical practices. In complex delivery environments, AI adoption must also be supported by appropriate governance structures that align clinical teams, leadership, and technology teams to ensure safe and responsible implementation.
  2. Workflow integration: AI tool implementation requires seamless integration within existing workflows to enhance usability and function as supportive additions rather than disruptive elements that impede efficient care delivery. Mechanisms should be established for tracking AI recommendations and their relationship to final clinical decisions.
  3. Post-deployment monitoring: Robust post-deployment monitoring is critical to ensure AI systems continue performing as intended. AI systems can drift from initial performance parameters when encountering new patient populations not represented in training data, as clinical practices evolve, or even within the same populations over time. Special attention should be directed toward monitoring outcomes in patient groups not adequately represented in training datasets.

DATA GOVERNANCE IMPLEMENTATION

  1. Patient data: All patient-identifiable information used or generated by AI systems must be collected, stored, and processed in strict accordance with the WMA Declaration of Taipei on Ethical Considerations Regarding Health Databases and Biobanks, as well as all applicable laws and regulations. Security safeguards are mandatory to preserve confidentiality, prevent unauthorized access, and uphold the therapeutic trust that underpins the patient–physician relationship. Additionally, patient data use must follow the same ethical safeguards applied to clinician data, including purpose limitation, transparency and consent, protection against misuse, and, where feasible, anonymization and minimization of data collected.
  2. Clinician data: AI systems are increasingly capturing granular data about clinicians (e.g., keystrokes, voice recordings, workflow metrics, prescribing patterns). Such information can support quality improvement and safety, but it also carries a risk of surveillance, punitive misuse, or erosion of professional autonomy. Therefore:
  • Purpose limitation: Clinician-identifiable data may be used only for clearly defined clinical, educational, or quality-improvement objectives that have been disclosed to—and agreed by—those clinicians.
  • Transparency and consent: Physicians must be informed, in advance and in comprehensible terms, what data are collected, how they will be analyzed, and who will have access. Explicit consent is required for uses beyond direct patient care or clinician-requested feedback.
  • Protection against misuse: Data must not be repurposed to penalize clinicians, set unrealistic performance quotas, or otherwise undermine the patient-physician relationship. Any secondary use (e.g. commercial analytics, administrative oversight) requires separate ethical review and consent.
  • Anonymization and minimization: Where feasible, clinician data should be de-identified or aggregated, and collection limited to the minimum necessary to achieve the stated purpose.
  3. Governance and oversight: Healthcare organizations must establish independent oversight mechanisms – such as, but not limited to, data protection officers, ethics committees, and periodic external audits – to verify compliance with safeguards for both patient and clinician data. Breaches or unauthorized uses must trigger transparent disclosure, remediation, and, where appropriate, sanctions. In addition, AI system developers must implement and support robust cybersecurity policies and controls to protect the confidentiality, integrity, and availability of health data throughout the AI system’s lifecycle.

MEDICAL EDUCATION AND CAPACITY BUILDING

  1. AI literacy requirements: Physicians must maintain appropriate AI literacy in the rapidly evolving AI landscape, including the knowledge and skills to use AI tools properly and the ability to critically understand and assess them. AI literacy must be systematically integrated into undergraduate medical curricula to ensure all physicians acquire a foundational understanding of these technologies. In addition, AI literacy should be reinforced through mandatory continuing professional development programs, enabling physicians to keep pace with evolving tools and to ensure their safe, ethical, and informed use in practice.
  2. Global equity: Focused attention must be directed toward bridging AI education gaps between regions, with particular emphasis on enhancing capacity in low- and middle-income countries (LMICs). Equitable distribution of educational resources and opportunities is essential to prevent widening disparities in AI implementation and ensure global benefit from these technological advances.

RESEARCH, INNOVATION AND EVALUATION

  1. Medical research standards: Any medical research involving AI, whether as the tool or object of study, must abide by accepted international standards of medical research, including, but not limited to, Good Clinical Practice, the WMA Declaration of Helsinki, and the WMA Declaration of Taipei.

GLOBAL CONSIDERATIONS AND COLLABORATION

  1. Cross-jurisdiction applicability: AI policies and infrastructures should, as far as possible, be aligned so that they are applicable across jurisdictions.
  2. Diverse healthcare environments: Appropriate AI solutions must be pursued across diverse healthcare environments, including low-resource settings. This requires supporting locally developed, context-sensitive innovations to ensure AI systems are responsive to local needs, realities, and resource constraints.
  3. Cultural Sensitivity: AI policies should respect varied cultural approaches while ensuring alignment with fundamental ethical principles, such as respect for human dignity, rights, and wellbeing.

RECOMMENDATIONS

  1. For physicians and medical associations: Medical professionals and their representative organizations should promote the development of comprehensive AI literacy programs, actively engage in AI governance structures – including contributing to the development of best practices for AI use in medicine – and uphold rigorous ethical standards to ensure quality patient care in an AI-enhanced healthcare environment. They should also consider creating educational materials for patients to support transparency and informed understanding of AI in healthcare.
  2. For healthcare facilities: Healthcare institutions must establish robust governance frameworks for the safe adoption of AI technologies and implement continuous monitoring processes. Organizations should balance innovation with safety considerations and maintain respect for clinical judgment when deploying AI systems. Importantly, AI implementation should be pursued when it demonstrably serves patients’ interests, without mandating AI use as a condition for licensure, participation, or reimbursement.
  3. For technology developers: Technology companies and AI developers must prioritize co-design approaches with practicing physicians and provide transparency in system development, deployment and use. Sustained collaboration between clinical and technical experts throughout the entire development lifecycle is essential to create tools that enhance healthcare quality and equity and that effectively support clinical activity.
  4. For regulators and policymakers: In consultation with medical associations and other health professional organizations, craft physician-informed regulations and foster international cooperation.
  5. For educational institutions: Embed AI training in curricula and support global capacity building.
  6. For researchers and innovators: Pursue ethical, equitable, and evidence-based AI advancements.

 

Appendix

Narrow AI:
Domain-specific applications confined to clearly defined clinical or administrative objectives.

Generative AI:
Models, often large language models, that create new clinical content—such as documentation drafts or treatment-plan suggestions—based on training data.

Foundational Models:
Broad, continuously trained models that underpin multiple healthcare applications and therefore require ongoing domain-specific oversight.

Machine learning:
A subset of artificial intelligence in which computer algorithms autonomously improve their performance at a specific task by learning complex relationships or identifying patterns in data, rather than by following explicit, pre-programmed instructions.

Patient-Physician Relationship:
Trust can be enhanced in the patient-physician relationship when:

  • Physicians transparently discuss the role of AI in patient care
  • AI systems demonstrably improve quality or safety outcomes
  • Patients clearly understand how their data is used and protected and how data governance is organized
  • Patients are offered more time with their physician

 
