Adopted by the 70th WMA General Assembly, Tbilisi, Georgia, October 2019

 

PREAMBLE

Artificial Intelligence (AI) is the ability of a machine to simulate intelligent behavior, a quality that enables an entity to function appropriately and with foresight in its environment. The term AI covers a range of methods, techniques and systems. Common examples of AI systems include, but are not limited to, natural language processing (NLP), computer vision and machine learning. In health care, as in other sectors, AI solutions may include a combination of these systems and methods.

(Note: A glossary of terms appears as an appendix to this statement.)

In health care, a more appropriate term is “augmented intelligence”, an alternative conceptualization that more accurately reflects the purpose of such systems because they are intended to coexist with human decision-making (1). Therefore, in the remainder of this statement, AI refers to augmented intelligence.

An AI system utilizing machine learning employs an algorithm programmed to learn (a "learner algorithm") from data referred to as "training data." The learner algorithm then automatically adjusts the machine learning model based on the training data. A "continuous learning system" updates the model without human oversight as new data is presented, whereas "locked learners" do not automatically update the model with new data. In health care, it is important to know whether the learner algorithm is eventually locked or continues to learn once deployed into clinical practice, in order to assess the system for quality, safety, and bias. Being able to trace the source of training data is critical to understanding the risk of applying a health care AI system to individuals whose personal characteristics differ significantly from those in the training data set.
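The distinction between a locked learner and a continuous learning system can be illustrated with a minimal, hypothetical sketch (assuming Python with the scikit-learn and NumPy libraries; the data, model choice, and outcome labels are purely illustrative and not drawn from this statement):

```python
# Minimal, hypothetical sketch: a "locked" learner versus a continuous learning system.
# Assumes Python with NumPy and scikit-learn installed; all data here is synthetic.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 5))        # 200 synthetic records, 5 features
y_train = (X_train[:, 0] > 0).astype(int)  # synthetic binary outcome label

# Locked learner: trained once, then deployed without further updates.
locked_model = SGDClassifier(random_state=0)
locked_model.fit(X_train, y_train)

# Continuous learning system: the same algorithm, but its model keeps changing
# as new data arrives after deployment.
continuous_model = SGDClassifier(random_state=0)
continuous_model.partial_fit(X_train, y_train, classes=[0, 1])

X_new = rng.normal(size=(20, 5))            # data encountered after deployment
y_new = (X_new[:, 0] > 0).astype(int)
continuous_model.partial_fit(X_new, y_new)  # model updated in production
# locked_model is deliberately left unchanged, so its behaviour remains auditable.
```

In this sketch, whether the model is ever updated after deployment (the second `partial_fit` call) is precisely the property that, per the paragraph above, should be disclosed and assessed for quality, safety, and bias.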

Health care AI generally describes methods, tools and solutions whose applications are focused on health care settings and patient care. In addition to clinical applications, there are many other applications of AI systems in health care including business operations, research, health care administration, and population health.

The concepts of AI and machine learning have quickly become attractive to health care organizations, but the terminology used often lacks a clear definition. Many see AI as a technological panacea; however, realizing the promise of AI may prove challenging, as it may be hampered by evolving regulatory oversight to ensure safety and clinical efficacy, a lack of widely accepted standards, liability issues, the need for clear laws and regulations governing data use, and a lack of shared understanding of terminology and definitions.

Some of the most promising uses for health care AI systems include predictive analytics, precision medicine, diagnostic imaging of diseases, and clinical decision support. Development in these areas is underway, and investments in AI have grown over the past several years [1]. Currently, health care AI systems have started to provide value in the realm of pattern recognition, NLP, and deep learning. Machine learning systems must be designed to identify data errors without perpetuating them. However, health care AI systems do not eliminate the need for the patient-physician relationship. Such systems augment physician-provided medical care and do not replace it.

Health care AI systems must be transparent and reproducible and must be trusted by both health care providers and patients. Systems must focus on users’ needs. Usability should be tested with participants whose needs and practice patterns reflect those of the intended end users, and systems must work effectively with people. Physicians will be more likely to accept AI systems that can be integrated into or improve their existing practice patterns and that improve patient care.

Opportunities

Health care AI can offer a transformative set of tools to physicians and patients and has the potential to make health care safer and more efficient. Automating hospital and office processes could improve physician productivity. The use of data mining to produce accurate, useful data at the right time may improve electronic health records and access to relevant patient information. Results of data mining may also provide evidence for trends that can inform resource allocation and utilization decisions. Analyzing all known data about a patient may produce new insights into diagnosis and best practices for treatment. The potential also exists to improve the patient experience, patient safety, and treatment adherence.

Applications of health care AI to medical education include continuing medical education, training simulations, learning assistance, coaching for medical students and residents, and objective assessment tools for evaluating competencies. These applications would help customize the medical education experience and facilitate independent individual or group learning.

There are a number of stakeholders and policy makers involved in shaping the evolution of AI in health care besides physicians. These include medical associations, businesses, governments, and those in the technology industry. Physicians have an unprecedented opportunity to positively inform and influence the discussions and debates currently taking place around AI. Physicians should proactively engage in these conversations in order to ensure that their perspectives are heard and incorporated into this rapidly developing technology.

Challenges

Developers and regulators of health care AI systems must ensure proper disclosure of the benefits, limitations, and scope of appropriate use of such systems. In turn, physicians will need to understand AI methods and systems in order to rely upon their clinical recommendations. Instruction in the opportunities and limitations of health care AI systems must be provided to both medical students and practicing physicians, as physician involvement is critical to the successful evolution of the field. AI systems must always adhere to the professional values and ethics of the medical profession.

Protecting confidentiality, control and ownership of patient data is a central tenet of the patient-physician relationship. Anonymization alone does not adequately protect patient information: machine-learning algorithms can identify an individual within large, complex data sets from as few as three data points, putting patient privacy at risk. Patients’ current expectations for the confidentiality of their personal information must be addressed, and new models that include consent and data stewardship must be developed. Viable technical solutions to mitigate these risks are being explored and will be critical to widespread adoption of health care AI systems.

Data structure and integrity are major challenges that need to be addressed when designing health care AI systems. The data sets on which machine learning systems are trained are created by humans and may reflect bias and contain errors. As a result, systems trained on these data sets may normalize the errors and biases they contain. Minorities may be disadvantaged because less data is available about minority populations. Another design consideration is how a model will be evaluated for accuracy, which requires very careful analysis of the training data set and its relationship to the data set used to evaluate the algorithm.
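One hedged, concrete illustration of this evaluation concern (assuming Python with scikit-learn and NumPy; the data and the subgroup labels below are entirely synthetic) is to report a model’s accuracy per population subgroup rather than only in aggregate, so that weaker performance on an under-represented group is surfaced rather than averaged away:

```python
# Hypothetical illustration: per-subgroup evaluation of a trained classifier.
# Assumes Python with scikit-learn and NumPy; data and subgroup labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
# Imbalanced subgroup membership: group "B" is under-represented in the data.
group = rng.choice(["A", "B"], size=1000, p=[0.9, 0.1])

# Train on the first 800 records, evaluate on the remaining 200.
model = LogisticRegression().fit(X[:800], y[:800])
X_test, y_test, g_test = X[800:], y[800:], group[800:]

overall = accuracy_score(y_test, model.predict(X_test))
print(f"overall accuracy: {overall:.2f}")
for g in ["A", "B"]:
    mask = g_test == g
    if mask.any():
        acc = accuracy_score(y_test[mask], model.predict(X_test[mask]))
        print(f"subgroup {g}: n={mask.sum()}, accuracy={acc:.2f}")
```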

Liability concerns present significant challenges to adoption. As existing and new oversight models for health care AI systems develop, the developers of such systems will typically have the most knowledge of the risks and be best positioned to mitigate them. As a result, developers of health care AI systems and those who mandate the use of such systems must be accountable and liable for adverse events resulting from malfunctions or inaccurate output. Physicians are often frustrated with the usability of electronic health records; systems are designed to support team-based care and other workflow patterns but often fall short. In addition to human factors in the design and development of health care AI systems, significant consideration must be given to appropriate system deployment. Not every system can be deployed to every setting due to variations in data sources.

Work is already underway to advance governance and oversight of health care AI, including standards for medical care, intellectual property rights, certification procedures or government regulation, and ethical and legal considerations.

 

RECOMMENDATIONS

That the WMA:

  • Recognize the potential for improving patient outcomes and physicians’ professional satisfaction through the use of health care AI, provided such systems conform to the principles of medical ethics, confidentiality of patient data, and non-discrimination.
  • Support the process of setting priorities for health care AI.
  • Encourage the review of medical curricula and educational opportunities for patients, physicians, medical students, health administrators and other health care professionals to promote greater understanding of the many aspects, both positive and negative, of health care AI.

The WMA urges its member organizations to:

  • Find opportunities to bring the practicing physician’s perspective to the development, design, validation and implementation of health care AI.
  • Advocate for direct physician involvement in the development and management of health care AI and appropriate government and professional oversight for safe, effective, equitable, ethical, and accessible AI products and services.
  • Advocate that all health care AI systems be transparent, reproducible, and trusted by both health care providers and patients.
  • Advocate for the primacy of the patient-physician relationship when developing and implementing health care AI systems. 

 

(1) For purposes of this statement, the term “health care AI” will be used to refer to systems that augment, not replace, the work of clinicians.

 

APPENDIX: GLOSSARY OF TERMS USED IN HEALTH CARE AUGMENTED INTELLIGENCE

Algorithm is a set of detailed, ordered instructions that are followed by a computer to solve a mathematical problem or to complete a computer process.

Artificial intelligence consists of a host of computational methods used to produce systems that perform tasks which exhibit intelligent behavior that is indistinguishable from human behavior.

Augmented intelligence (AI) is a conceptualization of artificial intelligence that focuses on artificial intelligence’s assistive role, emphasizing that its design enhances human intelligence rather than replaces it.

Computer vision is an interdisciplinary scientific field that deals with how computers can be made to gain high-level understanding from digital images or videos and seeks to automate tasks that the human visual system can do.

Data mining is an interdisciplinary subfield of computer science and statistics whose overall goal is to extract information (with intelligent methods) from a data set and transform the information into a comprehensible structure for further use.

Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to effectively perform specific tasks with minimal human interaction and without using explicit instructions, by learning from data and identification of patterns.

Natural language processing (NLP) is a subfield of computer science, information engineering, and artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural language data.

Training data is used to train an algorithm; it generally consists of a certain percentage of an overall dataset along with a testing set. As a rule, the better the training data, the better the algorithm performs. Once an algorithm is trained on a training set, it’s usually evaluated on a test set. The training set should be labelled or enriched to increase an algorithm’s confidence and accuracy.
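As a brief, hypothetical sketch of the train/test division described above (assuming Python with scikit-learn and NumPy; the data set and the 80/20 split ratio are illustrative only):

```python
# Illustrative train/test split (assumes Python with scikit-learn and NumPy).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))        # synthetic feature matrix
y = (X.sum(axis=1) > 0).astype(int)  # synthetic labels (the "labelled" data)

# Hold out 20% of the overall data set as a test set; train on the remaining 80%.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("accuracy on held-out test set:", model.score(X_test, y_test))
```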

 

Reference

[1] CB Insights. The Race for AI: Google, Baidu, Intel, Apple in a Rush to Grab Artificial Intelligence Startups. https://www.cbinsights.com/research/top-acquirers-ai-startups-ma-timeline/.

Adopted by the 53rd WMA General Assembly, Washington, DC, USA, October 2002, and
revised by the 63rd WMA General Assembly, Bangkok, Thailand, October 2012 and by the 74th WMA General Assembly, Kigali, Rwanda, October 2023, and
renamed “Declaration of Kigali” by the 75th WMA General Assembly, Helsinki, Finland, October 2024

 

PREAMBLE

Medical technology has come to play a key role in modern medicine. It has helped provide significantly more effective means of prevention, diagnosis, treatment and rehabilitation of illness, for example through the development and use of information technology, such as telehealth, digital platforms and large-scale data collection and analyses, or the use of advanced machinery and software in areas like medical genetics and radiology, including assistive, artificial, and augmented intelligences.

The importance of technology for medical care will continue to grow and the WMA welcomes this progress. The continuous development of medical technologies – and their use in both clinical and research settings – will create enormous benefits for the medical profession, patients, and society.

However, as for all other activities in the medical profession, the use of medical technology for any purpose, must take place within the framework provided by the basic principles of medical ethics as stated in the WMA Declaration of Geneva: The Physician’s Pledge, the International Code of Medical Ethics and the Declaration of Helsinki.

Respect for human dignity and rights, patient autonomy, beneficence, confidentiality, privacy and fairness must be the key guiding points when medical technology is developed and used for medical purposes.

The rapidly developing use of big data has implications for confidentiality and privacy. Using data in ways which would damage patients’ trust in how health services handle confidential data would be counterproductive. This must be borne in mind when introducing new data driven technology. It is essential to preserve high ethical standards and achieve the right balance between protecting confidentiality and using technology to improve patient care.

Additionally, bias introduced, for example, by social differences in the collection of data may skew the intended benefits of data-driven innovations in medical treatment.

As medical technology advances and the potential for commercial involvement grows, it is important to protect professional and clinical independence.

 

RECOMMENDATIONS

Beneficence

  1. The use of medical technology should have as its primary goal benefit for patients’ health and well-being. Medical technology should be based on sound scientific evidence and appropriate clinical expertise. Foreseeable risks and any increase in costs should be weighed against the anticipated benefits for the individual as well as for society, and medical technology should be tested or applied only if the anticipated benefits justify the risks.

Confidentiality and privacy

  1. Protecting confidentiality and respecting patient privacy are central tenets of medical ethics and must be respected in all uses of medical technology.

Patient autonomy

  1. The use of medical technology must respect patient autonomy, including the right of patients to make informed decisions about their health care and control access to their personal information. Patients must be given the necessary information to evaluate the potential benefits and risks involved, including those generated by the use of medical technology.

Justice

  1. To ensure informed choices and avoid bias or discrimination, the basis and impact of medical technology on medical decisions and patient outcomes should be transparent to patients and physicians. In support of fair and equitable provision of health care, the benefits of medical technology should be available to all patients and prioritized based upon clinical need and not on the ability to pay.

Human rights

  1. Medical technology must never be used to violate human rights, such as use in discriminatory practices, political persecution or violation of privacy.

Professional independence

  1. To guarantee professional and clinical independence, physicians must strive to maintain and update their expertise and skills, including by developing the necessary proficiency with medical technology. Medical curricula for students and trainees, as well as continuing education opportunities for physicians, must be updated to meet these needs. Physicians shall be included in research and development. Physicians shall remain the experts during shared decision-making and not be replaced by medical technology.
  2. Health care institutions and the medical profession should:
  • help ensure that innovative practices or technologies that are made available to physicians meet the highest standards for scientifically sound design and clinical value;
  • require that physicians who adopt innovations into their practice have relevant knowledge and skills;
  • provide meaningful professional oversight of innovation in patient care;
  • encourage physician-innovators to collect and share information about the resources needed to implement their innovations safely, effectively, and equitably; and
  • assure that medical technologies are applied and maintained appropriately in accordance with their intended purpose.
  3. The relevance of these general principles is stated in detail in several existing WMA policies. Of particular importance are:
  4. The WMA encourages all relevant stakeholders to embody the ethics guidance provided by these documents.