Australian Health Practitioner Regulation Agency - Meeting your professional obligations when using Artificial Intelligence in healthcare
Artificial Intelligence (AI) technology is rapidly becoming integrated into many areas of healthcare. This guidance explains how existing responsibilities in National Boards’ codes of conduct apply when practitioners use AI in their practice.

This guidance will be updated regularly to reflect new developments in AI and share updates from other regulators.

Artificial intelligence in healthcare

AI can be defined as ‘computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision making, and translation between languages’.1 Some AI tools available to health practitioners are designed specifically for healthcare and have been developed for a therapeutic purpose, for example, to diagnose and treat patients or clients. Many more are general purpose and are being applied in healthcare settings. Practitioners in some professions are increasingly using new AI tools, such as medical scribing tools, to develop or edit documents and to support workload management and efficiency in practice.

There are different types of AI, including machine learning (which encompasses generative AI), natural language processing and computer vision. Further information about each type can be found on the frequently asked questions page.

How are AI tools regulated in healthcare?

Some AI tools used in healthcare are regulated by the Therapeutic Goods Administration (TGA). The TGA regulates therapeutic goods that meet the definition of a medical device, which can include software (including AI-enabled software) if it has a therapeutic use.

Generative AI tools used in clinical practice, such as AI scribes, are usually intended for a general purpose. They do not have a therapeutic use, do not meet the definition of a medical device, and are therefore not regulated by the TGA.

Health practitioners can contact the vendor or search the Australian Register of Therapeutic Goods (ARTG) to check if the tools they are using are registered. To find out more about the TGA and its regulation of AI software, see our Further information about AI page.

What are the potential benefits of AI?

The potential of AI to transform and support innovation in healthcare has been the subject of much media and professional commentary. Ahpra and National Boards support the safe use of AI in healthcare, recognising its significant potential to improve health outcomes and create a more person-centred health system. While the potential of AI to improve health outcomes through better diagnostics and disease detection has been reported for some time, recent commentary has focussed on the benefits for health practitioners: improved care and patient satisfaction achieved by reducing administrative burdens and health practitioner burnout.

Meeting your professional obligations - what are the potential challenges of using AI?

As AI is evolving rapidly and new tools continue to emerge, its safe use in healthcare raises unique practical and ethical issues. Ahpra and National Boards have identified the following key principles to highlight existing professional obligations that apply when health practitioners use AI in their practice.

This guidance will be regularly reviewed and updated to reflect developments in technology. We have also developed some case studies about the use of newer generative AI tools in practice, and will add case studies focussing on other areas as these are developed.  

Key principles for health practitioners to consider to ensure they are meeting professional obligations when using AI in practice include:

Accountability

Regardless of what technology is used in providing healthcare, the practitioner remains responsible for delivering safe, quality care and for ensuring their own practice meets the professional obligations set out in their Code of Conduct. Practitioners must apply human judgment to any output of AI. TGA approval of a tool does not change a practitioner’s responsibility to apply human oversight and judgment to their use of AI, and all tools and software should be tested by the user or organisation to ensure they are fit for purpose before use in clinical practice. If using an AI scribing tool, the practitioner is responsible for checking the accuracy and relevance of records created using generative AI.

Understanding

Health practitioners using AI in their practice need to understand enough about the AI tool to use it safely and in a way that meets their professional obligations. At a minimum, the practitioner should review the product information for an AI tool, including how it has been trained and tested on populations, its intended use, and its limitations and the clinical contexts where it should not be used. Understanding the ‘intended use’ of an AI tool is particularly important, as this will inform a practitioner’s consideration of when it is appropriate to use the content or imaging generated by the AI, and of the associated risks and limitations, including diagnostic accuracy, data privacy and ethical considerations. It is also important to understand how data is being used to retrain the AI, where data is located and how it is stored.

Transparency

Health practitioners should inform patients and clients about their use of AI and consider any concerns raised. The level of information a health practitioner needs to provide will depend on how and when AI is being used. For example, if AI is being used as part of software to improve the accuracy of interpreting diagnostic images, the practitioner would not be expected to provide technical detail about how the software works. However, if a practitioner is using an AI tool to record consultations, they would need to provide more information about how the AI works and how it may affect the patient through its collection and use of their personal information (for example, if public generative AI software is used, personal information becomes part of the public domain).

Informed consent

Health practitioners need to involve patients in the decision to use AI tools that require input of their personal data, including where a patient’s data is required for their care (for example, via a recommended diagnostic device). Make sure you obtain informed consent from your patient, and ideally note the patient’s response in the health record. An AI scribing tool that uses generative AI will generally require input of personal data and will therefore require informed consent from your patient or client. Informed consent is particularly important for AI models that record private conversations (consultations), as there may be criminal implications if consent is not obtained before recording; the AI transcription software should include an explicit consent requirement as an initial step before proceeding.

Ethical and legal issues

Other professional obligations in each Board’s Code of Conduct or equivalent that are relevant to the use of AI in practice include:

  • ensuring confidentiality and privacy of your patient/client as required by privacy and health record legislation (see below), by checking that data is collected, stored, used and disclosed in accordance with legal requirements, and that your patient’s privacy is not inadvertently breached. Practitioners need to be aware of whether the patient data being used or recorded is also used to train the AI model for future patients, and whether identifiable patient data then finds its way into that learning database.
  • supporting the health and safety of Aboriginal and Torres Strait Islander people and all patients/clients from diverse backgrounds by ensuring you understand the inherent bias that can exist within data and algorithms used in AI applications and only using them when appropriate
  • complying with any relevant legislation and/or regulatory requirements that relate to using AI in practice, including the requirements of the TGA and your state and/or territory governments
  • being aware of the governance arrangements established by your employer, hospital or practice to oversee the implementation, use and monitoring of AI to ensure ongoing safety and performance, including your role and responsibilities, and
  • holding appropriate professional indemnity insurance arrangements for all aspects of your practice and consulting your provider if you’re unsure if AI tools used in your practice are covered.

1. https://www.oxfordreference.com/display/10.1093/oi/authority.20110803095426960

Page reviewed 22/08/2024