Adopted by the 76th WMA General Assembly, Porto, Portugal, October 2025

 

PREAMBLE

  1. The World Medical Association (WMA) recognizes that artificial intelligence (AI) is rapidly transforming all sectors, including healthcare. In this statement, the WMA reaffirms its commitment to patient-centered, physician-led care by emphasizing the concept of augmented intelligence – a framing that highlights AI’s role in strengthening, rather than supplanting, human judgment – while recognizing that, in specific, well-defined tasks, AI may perform independently but always under human accountability. Through augmentation, AI supports rather than replaces human judgment, empathy, and accountability.
  2. Building on lessons learned from early deployments, the WMA sets out principles that maximize AI’s benefits while mitigating its risks, ensuring that its development, regulation and use remain consistent with medical ethics, international human-rights standards and the public’s trust in the profession.

DEFINITIONS AND SCOPE

  1. To promote clarity across jurisdictions while embedding the augmented intelligence perspective, the WMA uses the following working definitions in the healthcare ecosystem:
  • Artificial Intelligence (AI): Computer systems designed to perform tasks that normally require human intelligence – such as learning, problem-solving, understanding language, and recognizing patterns.
  • Augmented Intelligence: Use of artificial intelligence designed to support—not replace—human capabilities in healthcare.
  • Physician-in-the-Loop (PITL): An extension of the general “human-in-the-loop” principle whereby a licensed physician—rather than any user—must review and retain final authority over all AI outputs before they shape clinical care. Where clinical care involves multidisciplinary teams, PITL implementation should ensure that all relevant licensed professionals are adequately consulted, while the physician retains ultimate clinical responsibility.
  2. Emphasis on “augmented”
  • The term signals a human-centered approach to AI—one that reinforces the physician’s role as the final decision-maker. Rather than viewing AI as a replacement, augmented intelligence frames these tools as extensions of clinical expertise, designed to support – not replace – professional judgment, empathy, and responsibility.
  • While “AI” is widely understood as artificial intelligence, emphasizing the augmented perspective helps ensure that systems are designed, validated, regulated, and trusted with the right ethical priorities.
  • For the medical profession, this framing also enables more effective advocacy—especially when engaging with policymakers, regulators, and stakeholders who default to the broader term AI. It equips physicians to promote technologies that truly align with the goals of ethical, patient-centered care.
  3. Scope and audience
  • This statement applies to all uses of AI in medicine, including clinical care and research, where AI primarily augments human decision-making. AI systems in administrative and educational contexts should be applied responsibly and with appropriate human oversight.
  • Its principles address physicians, other healthcare professionals, healthcare organizations, developers, regulators, payers, academic institutions, and industry partners, each of whom shares responsibility for ensuring that AI remains a safe, equitable, transparent, and ethically governed tool in the delivery of healthcare worldwide.

GUIDING PRINCIPLES FOR AI IN HEALTHCARE

  1. Human-centricity: Human-centricity in AI prioritizes human needs, values, and wellbeing above technological capabilities or performance metrics. This principle includes:
  • Maintaining and respecting patient dignity, autonomy, and rights through meaningful consent for AI use.
  • Preserving patient health, well-being, and the human connection as the paramount considerations.
  • Embedding cultural competence to ensure AI systems respect diverse patient values, clinical needs, languages, and health beliefs.
  2. Physician well-being: The well-being of physicians and other clinicians must be safeguarded, recognizing that reducing administrative burden and avoiding unnecessary cognitive load are essential not only for supporting healthcare professionals but also for ensuring the quality and safety of patient care.
  3. AI is a Tool: AI should serve as a means to support healthcare goals rather than an end in itself. Unlike traditional medical tools, AI systems may appear to learn and adapt without continuous human input, making it essential to pair their use with strong human oversight and ethical governance.
  4. Accountability: AI integration does not diminish physician responsibility for patient welfare and advocacy. Consistent with the PITL principle, physicians must continue exercising professional judgment, and the final responsibility and accountability for diagnosis, indication, and therapy must always lie with the physician. At the same time, the growing prevalence of these tools necessitates clearly distributed accountability. Responsibility should be appropriately allocated among all stakeholders, including but not limited to developers, healthcare organizations, regulators, researchers and clinicians.
  5. Transparency, Explainability, and Trustworthiness:
  • AI systems must be designed and developed in ways that ensure their outputs and recommendations can be meaningfully understood by their intended end users—whether physicians, other healthcare professionals, or patients—within the relevant clinical context. Transparency extends beyond the “black box” paradigm, while explainability provides insight into the basis for specific outputs, thereby fostering trust and enabling responsible use. Transparency requirements and disclosures must be tailored to the needs of physicians and patients without adding paperwork or extra administrative tasks. Ensuring these qualities is a shared responsibility across all stakeholders, including developers, healthcare organizations, regulators, researchers, and clinicians.
  • Mechanisms should exist for meaningful challenge of healthcare AI outputs, enabling patients and clinicians – including physicians – to question, review, or override AI recommendations when appropriate. This capacity is essential for building clinical trust, without which clinicians may reject valuable AI tools or overly rely on opaque systems.
  • Explainability exists on a spectrum, with some complex models functioning as “black boxes” where only input/output relationships can be observed. The level of explainability required should generally be proportional to the clinical risk involved and the degree of autonomy granted to the system. In high-stakes contexts such as life-and-death decision-making, additional safeguards and oversight must be in place whenever full explainability cannot be achieved.
  6. Safe deployment: Safe deployment of AI in healthcare requires real-world validation demonstrating consistent performance, clinical efficacy, and usability before widespread adoption. Before clinical deployment, AI systems must also undergo rigorous ethical and health equity impact assessments that are context-sensitive and adapted to the specific healthcare setting and population, with particular attention to vulnerable and underrepresented groups. Implementation must include continuous performance monitoring, feedback mechanisms, and iterative improvement protocols to ensure sustained benefit and global accessibility. Risks and harmful consequences, including bias, must be properly understood, anticipated, and mitigated.
  7. Equitable implementation: New and beneficial AI healthcare tools must be developed and deployed equitably, with the goal of being accessible worldwide. Equitable implementation should ultimately bridge gaps in healthcare access, treatment, and outcomes, while expanding access to technology across disparate health care facilities.
  8. Data governance: All stakeholders must maintain the highest standards of data collection, storage, processing, and sharing to protect patient privacy and institutional trust. This principle is foundational because healthcare AI depends on data access. Transparency around data provenance – including the origin, diversity, and quality of datasets used to train AI systems – must also be ensured to build trust and verify that data appropriately represents the patients being served.
  9. Environmental impact: Effective implementation of AI in healthcare requires careful consideration of its environmental impact and a strong commitment to sustainability. Environmental responsibility must be integrated alongside clinical validation to ensure that new technologies improve care while minimizing harm to the planet.

PHYSICIAN ROLES AND RESPONSIBILITIES

  1. Clinical Judgment and Accountability: As emphasized in the PITL principle, physician judgment remains essential when using AI in healthcare, serving as both an ethical imperative and a practical necessity. Physicians must maintain professional autonomy and clinical independence to act in the best interests of patients, consistent with the WMA Declaration of Seoul.
  2. Patient Advocacy: Physicians must safeguard patient health, well-being, and safety, ensuring that AI tools are only used in ways that genuinely benefit patients. Patient safety must remain a fundamental priority, whether or not augmented intelligence is applied.
  3. AI tool development: Physicians should be involved throughout the development and implementation of AI technologies in healthcare. They must participate in decision-making processes about technology and its use from the outset and be empowered to scrutinize new innovations, including for usability.
  4. Maintenance of competencies: Physicians must maintain core clinical expertise while also being educated and trained to work responsibly with AI systems. Delegation of tasks to AI must not erode the human capability required for safe, safety-critical care or for continuity when AI systems are unavailable or unreliable. Healthcare organizations should support this through ongoing education, simulation-based refreshers, periodic skills maintenance, and documented failover procedures that enable clinicians to critically appraise, override, and – when necessary – perform essential tasks independently.
  5. Incident reporting: Physicians must be empowered to report incidents and question outcomes resulting from the use of AI in healthcare.

PATIENT RIGHTS AND ENGAGEMENT

  1. While core patient rights are covered in existing WMA policies, AI introduces new risks – especially due to its reliance on data – that require focused ethical attention.
  2. Informed consent: Given AI systems’ reliance on patient health information, appropriate safeguards for data use are crucial. The principles of informed consent and transparency, building upon the WMA Declaration of Lisbon’s affirmation of patients’ rights to information and self-determination, must be rigorously applied in healthcare involving AI. Where possible, patients should be informed about the role AI plays in their care in ways that are understandable and meaningful, while physicians retain responsibility for ensuring safe and appropriate use of AI. In circumstances where full technical comprehension is impractical, informed consent may reasonably extend to a ‘consent for governance’ model, whereby patients place justified trust in physicians, healthcare institutions, and regulatory oversight to uphold their rights, safety, and welfare.
  3. Data rights: Patients must be informed about AI systems’ limitations and potential for error, as well as how physician oversight helps to ensure their protection. Patients should retain the right to request removal of their data from AI systems where feasible and legally permissible, and the right to understand how their data contributes to their care.
  4. Patient autonomy and explanation rights: Patient autonomy must be preserved through meaningful consent processes. Patients should retain the right, where feasible, to refuse AI-mediated interventions and request human-only assessment. Where such refusal is not possible due to systemic integration of AI, safeguards must ensure that patients’ data remain anonymous and non-traceable. Patients must have access to understandable and non-biased explanations of how AI contributes to their care, tailored to their information needs and preferences. They must also retain the right to dispute AI-generated recommendations they believe to be erroneous and to seek appropriate redress. This must extend to health insurer use of AI to determine patient care, payment, and coverage.
  5. Vulnerable patient population: Vulnerable patient cohorts, such as those with reduced decision-making capabilities, must not be disadvantaged or harmed through the use of AI in healthcare. Safeguards must include proactive bias mitigation, inclusive dataset development, and tailored consent or governance procedures to protect those unable to fully exercise autonomy. Particular attention must be given to ensuring that informed consent and data rights principles are applied in ways that do not reinforce structural inequities or exclude vulnerable groups from fair access to care.

GOVERNANCE, REGULATION, AND LIABILITY

  1. Up-to-date standards: Regulation, standards, and guidance must be suitably robust to safeguard patient safety and to ensure that the ethical rules of the medical profession are considered, with regulators empowered to stay up to date with developments and enforce legislation. Health care AI policies should be coordinated and consistent across government entities.
  2. Liability: Clear lines of legal liability must be established, encompassing AI developers as well as physicians and healthcare organisations. Accountability should be shared and proportional, reflecting each actor’s role in design, deployment, and use, rather than defaulting to a single actor alone.
  3. Continuous audit: There should be regular reviews and audits of regulatory processes and bodies surrounding AI in healthcare, including bias audits, ethical reviews, and participatory governance with physician input.

CLINICAL INTEGRATION AND IMPLEMENTATION OF HEALTH AI

  1. Tool evaluation and governance support: AI systems implemented in clinical settings must be validated for clinical relevance, safety, and effectiveness. Regular updates must be implemented to maintain security and ensure systems remain compatible with evolving clinical practices. In complex delivery environments, AI adoption must also be supported by appropriate governance structures that align clinical teams, leadership, and technology teams to ensure safe and responsible implementation.
  2. Workflow integration: AI tool implementation requires seamless integration within existing workflows to enhance usability and function as supportive additions rather than disruptive elements that impede efficient care delivery. Mechanisms should be established for tracking AI recommendations and their relationship to final clinical decisions.
  3. Post-deployment monitoring: Robust post-deployment monitoring is critical to ensure AI systems continue performing as intended. AI systems can drift from initial performance parameters when encountering new patient populations not represented in training data, as clinical practices evolve, or even within the same populations over time. Special attention should be directed toward monitoring outcomes in patient groups not adequately represented in training datasets.

DATA GOVERNANCE IMPLEMENTATION

  1. Patient data: All patient-identifiable information used or generated by AI systems must be collected, stored, and processed in strict accordance with the WMA Declaration of Taipei on Ethical Considerations Regarding Health Databases and Biobanks, as well as all applicable laws and regulations. Security safeguards are mandatory to preserve confidentiality, prevent unauthorised access, and uphold the therapeutic trust that underpins the patient–physician relationship. Additionally, patient data use must follow the same ethical safeguards applied to clinician data, including purpose limitation, transparency and consent, protection against misuse, and, where feasible, anonymisation and minimisation of data collected.
  2. Clinician data: AI systems are increasingly capturing granular data about clinicians (e.g., keystrokes, voice recordings, workflow metrics, prescribing patterns). Such information can support quality improvement and safety, but it also carries a risk of surveillance, punitive misuse, or erosion of professional autonomy. Therefore:
  • Purpose limitation: Clinician-identifiable data may be used only for clearly defined clinical, educational, or quality-improvement objectives that have been disclosed to—and agreed by—those clinicians.
  • Transparency and consent: Physicians must be informed, in advance and in comprehensible terms, what data are collected, how they will be analyzed, and who will have access. Explicit consent is required for uses beyond direct patient care or clinician-requested feedback.
  • Protection against misuse: Data must not be repurposed to penalize clinicians, set unrealistic performance quotas, or otherwise undermine the patient-physician relationship. Any secondary use (e.g. commercial analytics, administrative oversight) requires separate ethical review and consent.
  • Anonymization and minimization: Where feasible, clinician data should be de-identified or aggregated, and collection limited to the minimum necessary to achieve the stated purpose.
  3. Governance and oversight: Healthcare organisations must establish independent oversight mechanisms – such as, and not limited to, data protection officers, ethics committees, and periodic external audits – to verify compliance with safeguards for both patient and clinician data. Breaches or unauthorised uses must trigger transparent disclosure, remediation, and, where appropriate, sanctions. In addition, AI system developers must implement and support robust cybersecurity policies and controls to protect the confidentiality, integrity, and availability of health data throughout the AI system’s lifecycle.

MEDICAL EDUCATION AND CAPACITY BUILDING

  1. AI literacy requirements: Physicians must maintain appropriate AI literacy in the rapidly evolving AI landscape, including the knowledge and skills to use AI tools properly and the ability to critically understand and assess their outputs. AI literacy must be systematically integrated into undergraduate medical curricula to ensure all physicians acquire a foundational understanding of these technologies. In addition, AI literacy should be reinforced through mandatory continuing professional development programs, enabling physicians to keep pace with evolving tools and to ensure their safe, ethical, and informed use in practice.
  2. Global equity: Focused attention must be directed toward bridging AI education gaps between regions, with particular emphasis on enhancing capacity in low- and middle-income countries (LMICs). Equitable distribution of educational resources and opportunities is essential to prevent widening disparities in AI implementation and ensure global benefit from these technological advances.

RESEARCH, INNOVATION AND EVALUATION

  1. Medical research standards: Any medical research involving AI, whether as the tool or object of study, must abide by accepted international standards of medical research, including, but not limited to, Good Clinical Practice, the WMA Declaration of Helsinki, and the WMA Declaration of Taipei.

GLOBAL CONSIDERATIONS AND COLLABORATION

  1. Cross-jurisdiction applicability: AI policies and infrastructures should, as far as possible, be aligned to have applicability across jurisdictions.
  2. Diverse healthcare environments: Appropriate AI solutions must be pursued across diverse healthcare environments, including low-resource settings. This requires supporting locally developed, context-sensitive innovations to ensure AI systems are responsive to local needs, realities, and resource constraints.
  3. Cultural Sensitivity: AI policies should respect varied cultural approaches while ensuring alignment with fundamental ethical principles, such as respect for human dignity, rights, and wellbeing.

RECOMMENDATIONS

  1. For physicians and medical associations: Medical professionals and their representative organizations should promote the development of comprehensive AI literacy programs, actively engage in AI governance structures – including contributing to the development of best practices for AI use in medicine – and uphold rigorous ethical standards to ensure quality patient care in an AI-enhanced healthcare environment. They should also consider creating educational materials for patients to support transparency and informed understanding of AI in healthcare.
  2. For healthcare facilities: Healthcare institutions must establish robust governance frameworks for the safe adoption of AI technologies and implement continuous monitoring processes. Organizations should balance innovation with safety considerations and maintain respect for clinical judgment when deploying AI systems. Importantly, AI implementation should be pursued when it demonstrably serves patients’ interests, without mandating AI use as a condition for licensure, participation, or reimbursement.
  3. For technology developers: Technology companies and AI developers must prioritize co-design approaches with practicing physicians and provide transparency in system development, deployment and use. Sustained collaboration between clinical and technical experts throughout the entire development lifecycle is essential to create tools that enhance healthcare quality and equity and that effectively support clinical activity.
  4. For regulators and policymakers: In consultation with medical associations (and other health professions organisations), craft physician-informed regulations and foster international cooperation.
  5. For educational institutions: Embed AI training in curricula and support global capacity building.
  6. For researchers and innovators: Pursue ethical, equitable, and evidence-based AI advancements.

 

Appendix

Narrow AI:
Domain-specific applications confined to clearly defined clinical or administrative objectives.

Generative AI:
Models, often large language models, that create new clinical content—such as documentation drafts or treatment-plan suggestions—based on training data.

Foundational Models:
Broad, continuously trained models that underpin multiple healthcare applications and therefore require ongoing domain-specific oversight.

Machine learning:
A subset of artificial intelligence in which computer algorithms autonomously improve their performance at a specific task by learning complex relationships or identifying patterns in data, rather than by following explicit, pre-programmed instructions.

Patient-Physician Relationship:
Trust can be enhanced in the patient-physician relationship when:

- Physicians transparently discuss the role of AI in patient care
- AI systems demonstrably improve quality or safety outcomes
- Patients clearly understand how their data is used and protected and how data governance is organized
- Patients are offered more time with their physician

 

Adopted by the 70th WMA General Assembly, Tbilisi, Georgia, October 2019
and rescinded and archived by the 76th WMA General Assembly, Porto, Portugal, October 2025

 

PREAMBLE

Artificial Intelligence (AI) is the ability of a machine to simulate intelligent behavior, a quality that enables an entity to function appropriately and with foresight in its environment. The term AI covers a range of methods, techniques and systems. Common examples of AI systems include, but are not limited to, natural language processing (NLP), computer vision and machine learning. In health care, as in other sectors, AI solutions may include a combination of these systems and methods.

(Note: A glossary of terms appears as an appendix to this statement.)

In health care, a more appropriate term is “augmented intelligence”, an alternative conceptualization that more accurately reflects the purpose of such systems because they are intended to coexist with human decision-making (1). Therefore, in the remainder of this statement, AI refers to augmented intelligence.

An AI system utilizing machine learning employs an algorithm programmed to learn (“learner algorithm”) from data referred to as “training data.” The learner algorithm will then automatically adjust the machine learning model based on the training data. A “continuous learning system” updates the model without human oversight as new data is presented, whereas “locked learners” will not automatically update the model with new data. In health care, it is important to know whether the learner algorithm is locked or continues to learn once deployed into clinical practice, in order to assess the system for quality, safety, and bias. Being able to trace the source of training data is critical to understanding the risk associated with applying a health care AI system to individuals whose personal characteristics are significantly different from those in the training data set.

Health care AI generally describes methods, tools and solutions whose applications are focused on health care settings and patient care. In addition to clinical applications, there are many other applications of AI systems in health care including business operations, research, health care administration, and population health.

The concepts of AI and machine learning have quickly become attractive to health care organizations, but there is often no clear definition of terminology used. Many see AI as a technological panacea; however, realizing the promise of AI may have its challenges, since it might be hampered by evolving regulatory oversight to ensure safety and clinical efficacy, lack of widely accepted standards, liability issues, need for clear laws and regulations governing data uses, and a lack of shared understanding of terminology and definitions.

Some of the most promising uses for health care AI systems include predictive analytics, precision medicine, diagnostic imaging of diseases, and clinical decision support. Development in these areas is underway, and investments in AI have grown over the past several years [1]. Currently, health care AI systems have started to provide value in the realm of pattern recognition, NLP, and deep learning. Machine learning systems should be designed to identify data errors rather than perpetuate them. However, health care AI systems do not replace the need for the patient-physician relationship. Such systems augment physician-provided medical care and do not replace it.

Health care AI systems must be transparent, reproducible, and trusted by both health care providers and patients. Systems must focus on users’ needs. Usability should be tested by participants whose needs and practice patterns reflect those of the end user, and systems must work effectively with people. Physicians will be more likely to accept AI systems that can be integrated into or improve their existing practice patterns and also improve patient care.

Opportunities

Health care AI can offer a transformative set of tools to physicians and patients and has the potential to make health care safer and more efficient. By automating hospital and office processes, AI could improve physician productivity. The use of data mining to produce accurate, useful data at the right time may improve electronic health records and access to relevant patient information. Results of data mining may also provide evidence for trends that may serve to inform resource allocation and utilization decisions. New insights into diagnosis and best practices for treatment may be produced by analyzing all known data about a patient. The potential also exists to improve the patient experience, patient safety, and treatment adherence.

Applications of health care AI to medical education include continuing medical education, training simulations, learning assistance, coaching for medical students and residents, and may provide objective assessment tools to evaluate competencies. These applications would help customize the medical education experience and facilitate independent individual or group learning.

There are a number of stakeholders and policy makers involved in shaping the evolution of AI in health care besides physicians. These include medical associations, businesses, governments, and those in the technology industry. Physicians have an unprecedented opportunity to positively inform and influence the discussions and debates currently taking place around AI. Physicians should proactively engage in these conversations in order to ensure that their perspectives are heard and incorporated into this rapidly developing technology.

Challenges

Developers and regulators of health care AI systems must ensure proper disclosure and note the benefits, limitations, and scope of appropriate use of such systems. In turn, physicians will need to understand AI methods and systems in order to rely upon clinical recommendations. Instruction in the opportunities and limitations of health care AI systems must take place both with medical students and practicing physicians, as physician involvement is critical to the successful evolution of the field. AI systems must always adhere to the professional values and ethics of the medical profession.

Protecting confidentiality, control and ownership of patient data is a central tenet of the patient-physician relationship. Anonymization of data does not sufficiently protect a patient’s information, since machine-learning algorithms can identify an individual within large, complex data sets from as few as three data points, putting patient data privacy at risk. Patients’ current expectations for confidentiality of their personal information must be addressed, and new models that include consent and data stewardship developed. Viable technical solutions to mitigate these risks are being explored and will be critical to widespread adoption of health care AI systems.

Data structure and integrity are major challenges that need to be addressed when designing health care AI systems. The data sets on which machine learning systems are trained are created by humans and may reflect bias and contain errors. Systems trained on such data sets risk normalizing those errors and biases. Minorities may be disadvantaged because there is less data available about minority populations. Another design consideration is how a model will be evaluated for accuracy; this requires very careful analysis of the training data set and its relationship to the data set used to evaluate the algorithms.

Liability concerns present significant challenges to adoption. As oversight models for health care AI systems evolve, the developers of such systems will typically have the most knowledge of the risks and be best positioned to mitigate them. As a result, developers of health care AI systems and those who mandate use of such systems must be accountable and liable for adverse events resulting from malfunction(s) or inaccuracy in output. Physicians are often frustrated with the usability of electronic health records: systems are designed to support team-based care and other workflow patterns but often fall short. In addition to human factors in the design and development of health care AI systems, significant consideration must be given to appropriate system deployment. Not every system can be deployed to every setting due to data source variations.

Work is already underway to advance governance and oversight of health care AI, including standards for medical care, intellectual property rights, certification procedures or government regulation, and ethical and legal considerations.

 

RECOMMENDATIONS

That the WMA:

  • Recognize the potential for improving patient outcomes and physicians’ professional satisfaction through the use of health care AI, provided such systems conform to the principles of medical ethics, confidentiality of patient data, and non-discrimination.
  • Support the process of setting priorities for health care AI.
  • Encourage the review of medical curricula and educational opportunities for patients, physicians, medical students, health administrators and other health care professionals to promote greater understanding of the many aspects, both positive and negative, of health care AI.

The WMA urges its member organizations to:

  • Find opportunities to bring the practicing physician’s perspective to the development, design, validation and implementation of health care AI.
  • Advocate for direct physician involvement in the development and management of health care AI and appropriate government and professional oversight for safe, effective, equitable, ethical, and accessible AI products and services.
  • Advocate that all health care AI systems be transparent, reproducible, and trusted by both health care providers and patients.
  • Advocate for the primacy of the patient-physician relationship when developing and implementing health care AI systems. 

 

(1) For purposes of this statement, the term “health care AI” will be used to refer to systems that augment, not replace, the work of clinicians.

 

APPENDIX: GLOSSARY OF TERMS USED IN HEALTH CARE AUGMENTED INTELLIGENCE

Algorithm is a set of detailed, ordered instructions that are followed by a computer to solve a mathematical problem or to complete a computer process.

Artificial intelligence consists of a host of computational methods used to produce systems that perform tasks which exhibit intelligent behavior that is indistinguishable from human behavior.

Augmented intelligence (AI) is a conceptualization of artificial intelligence that focuses on artificial intelligence’s assistive role, emphasizing that its design enhances human intelligence rather than replaces it.

Computer vision is an interdisciplinary scientific field that deals with how computers can be made to gain high-level understanding from digital images or videos and seeks to automate tasks that the human visual system can do.

Data mining is an interdisciplinary subfield of computer science and statistics whose overall goal is to extract information (with intelligent methods) from a data set and transform the information into a comprehensible structure for further use.

Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to effectively perform specific tasks with minimal human interaction and without using explicit instructions, by learning from data and identification of patterns.

Natural language processing (NLP) is a subfield of computer science, information engineering, and artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural language data.

Training data is the data used to train an algorithm; it generally consists of a set percentage of an overall data set, with the remainder reserved as a testing set. As a rule, the better the training data, the better the algorithm performs. Once an algorithm is trained on a training set, it is usually evaluated on a test set. The training set should be labelled or enriched to increase an algorithm’s confidence and accuracy.
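The train/test partition described in this glossary entry can be sketched in a few lines of code. This is an illustrative example only, not part of the WMA statement; the data set, split fraction, and random seed are hypothetical:

```python
import random

def train_test_split(dataset, train_fraction=0.8, seed=42):
    """Randomly partition a labelled data set into a training set
    and a testing set, without modifying the original data."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = dataset[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# Hypothetical labelled records: (feature values, label)
records = [([i, i * 2], i % 2) for i in range(10)]
train_set, test_set = train_test_split(records)
# With 10 records and an 80/20 split: 8 records for training, 2 for testing
```

The algorithm would be fitted on `train_set` and its accuracy reported on the held-out `test_set`, so that evaluation reflects performance on data the model has not seen.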

 


Adopted by the 53rd WMA General Assembly, Washington, DC, USA, October 2002, and
revised by the 63rd WMA General Assembly, Bangkok, Thailand, October 2012 and
by the 74th WMA General Assembly, Kigali, Rwanda, October 2023, and
renamed “Declaration of Kigali” by the 75th WMA General Assembly, Helsinki, Finland, October 2024

 

PREAMBLE

Medical technology has come to play a key role in modern medicine. It has helped provide significantly more effective means of prevention, diagnosis, treatment and rehabilitation of illness, for example through the development and use of information technology, such as telehealth, digital platforms and large-scale data collection and analyses, or the use of advanced machinery and software in areas like medical genetics and radiology, including assistive, artificial, and augmented intelligences.

The importance of technology for medical care will continue to grow and the WMA welcomes this progress. The continuous development of medical technologies – and their use in both clinical and research settings – will create enormous benefits for the medical profession, patients, and society.

However, as for all other activities in the medical profession, the use of medical technology for any purpose, must take place within the framework provided by the basic principles of medical ethics as stated in the WMA Declaration of Geneva: The Physician’s Pledge, the International Code of Medical Ethics and the Declaration of Helsinki.

Respect for human dignity and rights, patient autonomy, beneficence, confidentiality, privacy and fairness must be the key guiding points when medical technology is developed and used for medical purposes.

The rapidly developing use of big data has implications for confidentiality and privacy. Using data in ways that would damage patients’ trust in how health services handle confidential data would be counterproductive. This must be borne in mind when introducing new data-driven technology. It is essential to preserve high ethical standards and achieve the right balance between protecting confidentiality and using technology to improve patient care.

Additionally, bias, arising for example from social differences in the collection of data, may skew the intended benefits of data-driven innovations in medical treatment.

As medical technology advances and the potential for commercial involvement grows, it is important to protect professional and clinical independence.

 

RECOMMENDATIONS

Beneficence

  1. The use of medical technology should have as its primary goal benefit for patients’ health and well-being. Medical technology should be based on sound scientific evidence and appropriate clinical expertise. Foreseeable risks and any increase in costs should be weighed against the anticipated benefits for the individual as well as for society, and medical technology should be tested or applied only if the anticipated benefits justify the risks.

Confidentiality and privacy

  1. Protecting confidentiality and respecting patient privacy are central tenets of medical ethics and must be respected in all uses of medical technology.

Patient autonomy

  1. The use of medical technology must respect patient autonomy, including the right of patients to make informed decisions about their health care and control access to their personal information. Patients must be given the necessary information to evaluate the potential benefits and risks involved, including those generated by the use of medical technology.

Justice

  1. To ensure informed choices and avoid bias or discrimination, the basis and impact of medical technology on medical decisions and patient outcomes should be transparent to patients and physicians. In support of fair and equitable provision of health care, the benefits of medical technology should be available to all patients and prioritized based upon clinical need and not on the ability to pay.

Human rights

  1. Medical technology must never be used to violate human rights, such as use in discriminatory practices, political persecution or violation of privacy.

Professional independence

  1. To guarantee professional and clinical independence, physicians must strive to maintain and update their expertise and skills, e.g., by developing the necessary proficiency with medical technology. Medical curricula for students and trainees, as well as continuing education opportunities for physicians, must be updated to meet these needs. Physicians shall be involved in research and development. Physicians shall remain the experts during shared decision-making and shall not be replaced by medical technology.
  2. Health care institutions and the medical profession should:
  • help ensure that innovative practices or technologies that are made available to physicians meet the highest standards for scientifically sound design and clinical value;
  • require that physicians who adopt innovations into their practice have relevant knowledge and skills;
  • provide meaningful professional oversight of innovation in patient care;
  • encourage physician-innovators to collect and share information about the resources needed to implement their innovations safely, effectively, and equitably; and
  • assure that medical technologies are applied and maintained appropriately in accordance with their intended purpose.
  3. The relevance of these general principles is stated in detail in several existing WMA policies. Of particular importance are:
  4. The WMA encourages all relevant stakeholders to embody the ethics guidance provided by these documents.