Adopted by the 70th WMA General Assembly, Tbilisi, Georgia, October 2019

 

PREAMBLE

Artificial Intelligence (AI) is the ability of a machine to simulate intelligent behavior, a quality that enables an entity to function appropriately and with foresight in its environment. The term AI covers a range of methods, techniques and systems. Common examples of AI systems include, but are not limited to, natural language processing (NLP), computer vision and machine learning. In health care, as in other sectors, AI solutions may include a combination of these systems and methods.

(Note: A glossary of terms appears as an appendix to this statement.)

In health care, a more appropriate term is “augmented intelligence”, an alternative conceptualization that more accurately reflects the purpose of such systems because they are intended to coexist with human decision-making (1). Therefore, in the remainder of this statement, AI refers to augmented intelligence.

An AI system utilizing machine learning employs an algorithm programmed to learn (a “learner algorithm”) from data referred to as “training data.” The learner algorithm then automatically adjusts the machine learning model based on the training data. A “continuous learning system” updates the model without human oversight as new data is presented, whereas a “locked learner” will not automatically update the model with new data. In health care, it is important to know whether the learner algorithm is eventually locked or continues to learn once deployed into clinical practice, in order to assess the system for quality, safety, and bias. Being able to trace the source of training data is critical to understanding the risk of applying a health care AI system to individuals whose personal characteristics differ significantly from those in the training data set.
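The distinction between locked and continuous learners described above can be illustrated with a minimal sketch. The `Learner` class below is a hypothetical toy model (a running mean of its data), not any real health care AI system; it exists only to show how post-deployment data affects one kind of learner and not the other.

```python
# Minimal sketch contrasting a "locked" and a "continuous" learner.
# The Learner class and its update rule are illustrative assumptions.

class Learner:
    """A trivial model: predicts the running mean of its training data."""

    def __init__(self, locked=False):
        self.locked = locked
        self.total = 0.0
        self.count = 0

    def train(self, values):
        # Initial training on the shared training data set.
        for v in values:
            self.total += v
            self.count += 1

    def observe(self, value):
        # A continuous learning system keeps updating its model from
        # new data after deployment; a locked learner ignores it.
        if not self.locked:
            self.total += value
            self.count += 1

    def predict(self):
        return self.total / self.count if self.count else None


locked = Learner(locked=True)
continuous = Learner(locked=False)
for model in (locked, continuous):
    model.train([1.0, 2.0, 3.0])  # identical training data

locked.observe(10.0)      # ignored: model stays fixed
continuous.observe(10.0)  # absorbed: model shifts

print(locked.predict())      # 2.0 (unchanged after deployment)
print(continuous.predict())  # 4.0 (drifted with new data)
```

The sketch shows why traceability matters: after deployment the two systems no longer make the same prediction, even though they were trained identically.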

Health care AI generally describes methods, tools and solutions whose applications are focused on health care settings and patient care. In addition to clinical applications, there are many other applications of AI systems in health care including business operations, research, health care administration, and population health.

The concepts of AI and machine learning have quickly become attractive to health care organizations, but there is often no clear definition of terminology used. Many see AI as a technological panacea; however, realizing the promise of AI may have its challenges, since it might be hampered by evolving regulatory oversight to ensure safety and clinical efficacy, lack of widely accepted standards, liability issues, need for clear laws and regulations governing data uses, and a lack of shared understanding of terminology and definitions.

Some of the most promising uses for health care AI systems include predictive analytics, precision medicine, diagnostic imaging of diseases, and clinical decision support. Development in these areas is underway, and investments in AI have grown over the past several years [1]. Currently, health care AI systems have started to provide value in the realms of pattern recognition, NLP, and deep learning. Machine learning systems must be designed to identify data errors rather than perpetuate them. However, health care AI systems do not replace the need for the patient-physician relationship: such systems augment physician-provided medical care, they do not replace it.

Health care AI systems must be transparent, reproducible, and trusted by both health care providers and patients. Systems must focus on users’ needs. Usability should be tested by participants whose needs and practice patterns reflect those of the end users, and systems must work effectively with people. Physicians will be more likely to accept AI systems that can be integrated into, or improve, their existing practice patterns and that also improve patient care.

Opportunities

Health care AI can offer a transformative set of tools to physicians and patients and has the potential to make health care safer and more efficient. Automating hospital and office processes would improve physician productivity. The use of data mining to produce accurate, useful data at the right time may improve electronic health records and access to relevant patient information. Results of data mining may also reveal trends that can inform resource allocation and utilization decisions. Analyzing all known data about a patient may produce new insights into diagnosis and best practices for treatment. The potential also exists to improve the patient experience, patient safety, and treatment adherence.

Applications of health care AI to medical education include continuing medical education, training simulations, learning assistance, coaching for medical students and residents, and may provide objective assessment tools to evaluate competencies. These applications would help customize the medical education experience and facilitate independent individual or group learning.

There are a number of stakeholders and policy makers involved in shaping the evolution of AI in health care besides physicians. These include medical associations, businesses, governments, and those in the technology industry. Physicians have an unprecedented opportunity to positively inform and influence the discussions and debates currently taking place around AI. Physicians should proactively engage in these conversations in order to ensure that their perspectives are heard and incorporated into this rapidly developing technology.

Challenges

Developers and regulators of health care AI systems must ensure proper disclosure and note the benefits, limitations, and scope of appropriate use of such systems. In turn, physicians will need to understand AI methods and systems in order to rely upon clinical recommendations. Instruction in the opportunities and limitations of health care AI systems must be provided both to medical students and to practicing physicians, as physician involvement is critical to the successful evolution of the field. AI systems must always adhere to the professional values and ethics of the medical profession.

Protecting the confidentiality, control and ownership of patient data is a central tenet of the patient-physician relationship. Anonymization alone does not adequately protect a patient’s information: machine-learning algorithms can identify an individual within large, complex data sets from as few as three data points, putting patient data privacy at risk. Patients’ current expectations of confidentiality for their personal information must be addressed, and new models that include consent and data stewardship must be developed. Viable technical solutions to mitigate these risks are being explored and will be critical to widespread adoption of health care AI systems.

Data structure and integrity are major challenges that must be addressed when designing health care AI systems. The data sets on which machine learning systems are trained are created by humans and may reflect bias and contain errors; systems trained on such data sets can normalize and perpetuate those errors and biases. Minorities may be disadvantaged because less data is available about minority populations. Another design consideration is how a model’s accuracy will be evaluated, which requires careful analysis of the training data set and its relationship to the data set used to evaluate the algorithms.
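The relationship between the training data set and the evaluation data set can be sketched as a simple held-out split. The toy data set and threshold classifier below are illustrative assumptions only, not any real clinical model; the point is that accuracy is measured on data the model never saw during training.

```python
# Minimal sketch of held-out evaluation: the model is trained on one
# portion of the data and scored only on the remainder.
# The data set and threshold classifier are illustrative assumptions.
import random

random.seed(0)
# Toy data: (measurement, label) pairs; the true rule is label = 1
# whenever the measurement exceeds 5.
data = [(x, int(x > 5)) for x in (random.uniform(0, 10) for _ in range(200))]
random.shuffle(data)

split = int(0.8 * len(data))            # 80/20 train/test split
train, test = data[:split], data[split:]

# "Training": place the decision threshold midway between the mean
# negative and mean positive measurements in the training set.
neg = [x for x, y in train if y == 0]
pos = [x for x, y in train if y == 1]
threshold = (sum(neg) / len(neg) + sum(pos) / len(pos)) / 2

# Evaluation uses only the held-out test data.
accuracy = sum(int(x > threshold) == y for x, y in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

If the training data were biased or unrepresentative of the population the test set is drawn from, the learned threshold would shift and the held-out accuracy would fall, which is why the relationship between the two data sets must be analyzed carefully.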

Liability concerns present significant challenges to adoption. As existing and new oversight models for health care AI systems develop, the developers of such systems will typically have the most knowledge of the risks and be best positioned to mitigate them. As a result, developers of health care AI systems, and those who mandate use of such systems, must be accountable and liable for adverse events resulting from malfunction or inaccuracy in output. Physicians are often frustrated with the usability of electronic health records: systems designed to support team-based care and other workflow patterns often fall short. In addition to human factors in the design and development of health care AI systems, significant consideration must be given to appropriate system deployment, as data source variations mean that not every system can be deployed to every setting.

Work is already underway to advance governance and oversight of health care AI, including standards for medical care, intellectual property rights, certification procedures or government regulation, and ethical and legal considerations.

 

RECOMMENDATIONS

That the WMA:

  • Recognize the potential for improving patient outcomes and physicians’ professional satisfaction through the use of health care AI, provided such systems conform to the principles of medical ethics, confidentiality of patient data, and non-discrimination.
  • Support the process of setting priorities for health care AI.
  • Encourage the review of medical curricula and educational opportunities for patients, physicians, medical students, health administrators and other health care professionals to promote greater understanding of the many aspects, both positive and negative, of health care AI.

The WMA urges its member organizations to:

  • Find opportunities to bring the practicing physician’s perspective to the development, design, validation and implementation of health care AI.
  • Advocate for direct physician involvement in the development and management of health care AI and appropriate government and professional oversight for safe, effective, equitable, ethical, and accessible AI products and services.
  • Advocate that all health care AI systems be transparent, reproducible, and trusted by both health care providers and patients.
  • Advocate for the primacy of the patient-physician relationship when developing and implementing health care AI systems. 

 

(1) For purposes of this statement, the term “health care AI” will be used to refer to systems that augment, not replace, the work of clinicians.

 

APPENDIX: GLOSSARY OF TERMS USED IN HEALTH CARE AUGMENTED INTELLIGENCE

Algorithm is a set of detailed, ordered instructions that are followed by a computer to solve a mathematical problem or to complete a computer process.

Artificial intelligence consists of a host of computational methods used to produce systems that perform tasks which exhibit intelligent behavior that is indistinguishable from human behavior.

Augmented intelligence (AI) is a conceptualization of artificial intelligence that focuses on artificial intelligence’s assistive role, emphasizing that its design enhances human intelligence rather than replaces it.

Computer vision is an interdisciplinary scientific field that deals with how computers can be made to gain high-level understanding from digital images or videos and seeks to automate tasks that the human visual system can do.

Data mining is an interdisciplinary subfield of computer science and statistics whose overall goal is to extract information (with intelligent methods) from a data set and transform the information into a comprehensible structure for further use.

Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to effectively perform specific tasks with minimal human interaction and without using explicit instructions, by learning from data and identification of patterns.

Natural language processing (NLP) is a subfield of computer science, information engineering, and artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural language data.

Training data is used to train an algorithm; it generally consists of a certain percentage of an overall dataset, with the remainder set aside as a testing set. As a rule, the better the training data, the better the algorithm performs. Once an algorithm is trained on a training set, it is usually evaluated on a test set. The training set should be labelled or enriched to increase an algorithm’s confidence and accuracy.

 

Reference

[1] CB Insights. The Race for AI: Google, Baidu, Intel, Apple in a Rush to Grab Artificial Intelligence Startups. https://www.cbinsights.com/research/top-acquirers-ai-startups-ma-timeline/.

Adopted by the 58th WMA General Assembly, Copenhagen, Denmark, October 2007
amended by the 69th WMA General Assembly, Reykjavik, Iceland, October 2018
and rescinded and archived by the 73rd WMA General Assembly, Berlin, Germany, October 2022

DEFINITION

Telemedicine is the practice of medicine over a distance, in which interventions, diagnoses, therapeutic decisions, and subsequent treatment recommendations are based on patient data, documents and other information transmitted through telecommunication systems.

Telemedicine can take place between a physician and a patient or between two or more physicians including other healthcare professionals.

PREAMBLE 

  • The development and implementation of information and communication technology are creating new and different ways of practicing medicine. Telemedicine is used for patients who cannot see an appropriate physician in a timely manner because of inaccessibility due to distance, physical disability, employment, family commitments (including caring for others), cost to patients and physician schedules. It has the capacity to reach patients with limited access to medical assistance and has the potential to improve health care.
  • Face-to-face consultation between physician and patient remains the gold standard of clinical care.
  • The delivery of telemedicine services must be consistent with in-person services and supported by evidence.
  • The principles of medical ethics that are mandatory for the profession must also be respected in the practice of telemedicine.

PRINCIPLES

Physicians must respect the following ethical guidelines when practicing telemedicine:

1. The patient-physician relationship should be based on a personal examination and sufficient knowledge of the patient’s medical history. Telemedicine should be employed primarily in situations in which a physician cannot be physically present within a safe and acceptable time period. It could also be used in management of chronic conditions or follow-up after initial treatment where it has been proven to be safe and effective.

2. The patient-physician relationship must be based on mutual trust and respect. It is therefore essential that the physician and patient be able to identify each other reliably when telemedicine is employed. In case of consultation between two or more professionals within or between different jurisdictions, the primary physician remains responsible for the care and coordination of the patient with the distant medical team.

3. The physician must aim to ensure that patient confidentiality, privacy and data integrity are not compromised. Data obtained during a telemedicine consultation must be secured to prevent unauthorized access and breaches of identifiable patient information through appropriate and up to date security measures in accordance with local legislation. Electronic transmission of information must also be safeguarded against unauthorized access.

4. Proper informed consent requires that all necessary information regarding the distinctive features of a telemedicine visit be explained fully to patients, including but not limited to:

  • how telemedicine works,
  • how to schedule appointments,
  • privacy concerns,
  • the possibility of technological failure, including confidentiality breaches,
  • protocols for contact during virtual visits,
  • prescribing policies and coordinating care with other health professionals, all explained in a clear and understandable manner, without influencing the patient’s choices.

5. Physicians must be aware that certain telemedicine technologies could be unaffordable to patients and hence impede access. Inequitable access to telemedicine can further widen the health outcomes gap between the poor and the rich.

Autonomy and Privacy of the Physician

6. A physician should not participate in telemedicine if it violates the legal or ethical framework of the country.

7. Telemedicine can potentially infringe on the physician’s privacy due to 24/7 virtual availability. The physician must inform patients about his/her availability and recommend alternative services, such as emergency care, when he/she is not accessible.

8. The physician should exercise their professional autonomy in deciding whether a telemedicine versus face-to-face consultation is appropriate.

9. A physician should exercise autonomy and discretion in selecting the telemedicine platform to be used.

Responsibilities of the Physician

10. A physician whose advice is sought through the use of telemedicine should keep a detailed record of the advice he/she delivers as well as the information he/she received and on which the advice was based in order to ensure traceability.

11. If a decision is made to use telemedicine it is necessary to ensure that the users (patients and healthcare professionals) are able to use the necessary telecommunication system.

12. The physician must seek to ensure that the patient has understood the advice and treatment suggestions given and take steps in so far as possible to promote continuity of care.

13. The physician asking for another physician’s advice or second opinion remains responsible for treatment and other decisions and recommendations given to the patient.

14. The physician should be aware of and respect the special difficulties and uncertainties that may arise when he/she is in contact with the patient through means of telecommunication. A physician must be prepared to recommend direct patient-physician contact when he/she believes it is in the patient’s best interests.

15. Physicians should only practise telemedicine in countries/jurisdictions where they are licenced to practise. Cross-jurisdiction consultations should only be allowed between two physicians.

16. Physicians should ensure that their medical indemnity cover includes cover for telemedicine.

Quality of Care

17. Healthcare quality assessment measures must be used regularly to ensure patient safety and the best possible diagnostic and treatment practices during telemedicine procedures. The delivery of telemedicine services must follow evidence-based practice guidelines to the degree they are available, to ensure patient safety, quality of care and positive health outcomes. Like all health care interventions, telemedicine must be tested for its effectiveness, efficiency, safety, feasibility and cost-effectiveness.

18. The possibilities and weaknesses of telemedicine in emergencies must be duly identified. If it is necessary to use telemedicine in an emergency situation, the advice and treatment suggestions are influenced by the severity of the patient’s medical condition and the competency of the persons who are with the patient. Entities that deliver telemedicine services must establish protocols for referrals for emergency services.

RECOMMENDATIONS

  1. Telemedicine should be appropriately adapted to local regulatory frameworks, which may include licencing of telemedicine platforms in the best interest of patients.
  2. Where appropriate the WMA and National Medical Associations should encourage the development of ethical norms, practice guidelines, national legislation and international agreements on subjects related to the practice of telemedicine, while protecting the patient-physician relationship, confidentiality, and quality of medical care.
  3. Telemedicine should not be viewed as equal to face-to-face healthcare and should not be introduced solely to cut costs or as a perverse incentive to over-service and increase earnings for physicians.
  4. Use of telemedicine requires the profession to explicitly identify and manage adverse consequences on collegial relationships and referral patterns.
  5. New technologies and styles of practice integration may require new guidelines and standards.
  6. Physicians should lobby for ethical telemedicine practices that are in the best interests of patients.