Artificial Intelligence (AI) technology is rapidly becoming integrated into many areas of healthcare. This guidance explains how existing responsibilities in National Boards’ codes of conduct apply when practitioners use AI in their practice.
This guidance will be updated regularly to reflect new developments in AI and share updates from other regulators.
AI can be defined as ‘computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision making, and translation between languages’1. Some AI tools available to health practitioners are designed specifically for healthcare and have been developed for a therapeutic purpose, for example, to diagnose and treat patients or clients. Many more are general purpose and are being applied in healthcare settings. Practitioners in some professions are increasingly using newer AI, such as medical scribing tools, to develop or edit documents and to support workload management and efficiency in practice.
There are different types of AI, including machine learning (which encompasses generative AI), natural language processing and computer vision. Further information about each type can be found on the frequently asked questions page.
Some AI tools used in healthcare are regulated by the Therapeutic Goods Administration (TGA). The TGA regulates therapeutic goods that meet the definition of a medical device, which includes software (including AI-enabled software) that has a therapeutic use and meets that definition.
Generative AI tools used in clinical practice, such as AI scribes, are usually intended for general purposes and do not have a therapeutic use or meet the definition of a medical device; they are therefore not regulated by the TGA.
Health practitioners can contact the vendor or search the Australian Register of Therapeutic Goods (ARTG) to check if the tools they are using are registered. To find out more about the TGA and its regulation of AI software, see our Further information about AI page.
The potential of AI to transform and support innovation in healthcare has been the subject of much media and professional commentary. Ahpra and National Boards support the safe use of AI in healthcare, recognising its significant potential to improve health outcomes and create a more person-centred health system. While the potential of AI to improve health outcomes through better diagnostics and disease detection has been reported for some time, recent commentary has focussed on the benefits for health practitioners: improving care and patient satisfaction by reducing administrative burden and practitioner burnout.
As AI is rapidly evolving and new tools continue to emerge, its safe use in healthcare raises unique practical and ethical issues. Ahpra and National Boards have identified the following key principles to highlight existing professional obligations that apply when health practitioners use AI in their practice.
This guidance will be regularly reviewed and updated to reflect developments in technology. We have also developed some case studies about the use of newer generative AI tools in practice, and will add case studies focussing on other areas as these are developed.
Key principles to help health practitioners meet their professional obligations when using AI in practice include:
Accountability: Regardless of what technology is used in providing healthcare, the practitioner remains responsible for delivering safe, quality care and for ensuring their own practice meets the professional obligations set out in their Code of Conduct. Practitioners must apply human judgment to any output of AI. TGA approval of a tool does not change a practitioner’s responsibility to apply human oversight and judgment to their use of AI, and all tools and software should be tested by the user or organisation to confirm they are fit for purpose before use in clinical practice. If using an AI scribing tool, the practitioner is responsible for checking the accuracy and relevance of records created using generative AI (see the first sketch after these principles for what such a review step can look like in a workflow).
Understanding: Health practitioners using AI in their practice need to understand enough about an AI tool to use it safely and in a way that meets their professional obligations. At a minimum, the practitioner should review the product information about the tool, including how it was trained and tested and on which populations, its intended use, and its limitations and the clinical contexts where it should not be used. Understanding the ‘intended use’ of an AI tool is particularly important, as this will inform a practitioner’s consideration of when it is appropriate to use the content or imaging the AI generates, and of the associated risks and limitations, including diagnostic accuracy, data privacy and ethical considerations. It is also important to understand how the data is used to retrain the AI, and where and how the data is stored.
Transparency: Health practitioners should inform patients and clients about their use of AI and consider any concerns raised. The level of information a practitioner needs to provide will depend on how and when AI is being used. For example, if AI is used within software to improve the accuracy of interpreting diagnostic images, the practitioner would not be expected to provide technical detail about how the software works. However, if a practitioner is using an AI tool to record consultations, they would need to provide more information about how the AI works and how it may affect the patient, including how it collects and uses their personal information (for example, if public generative AI software is used, personal information may enter the public domain).
Informed consent: Health practitioners need to involve patients in the decision to use AI tools that require input of their personal data, including where a patient’s data is required for care (for example, via a recommended diagnostic device). Make sure you obtain informed consent from your patient, and ideally note the patient’s response in the health record. An AI scribing tool that uses generative AI will generally require input of personal data and therefore require informed consent from your patient or client. Informed consent is particularly important for AI models that record private conversations (consultations), as there may be criminal implications if consent is not obtained before recording; AI transcription software should include an explicit consent requirement as an initial step before proceeding (see the second sketch below).
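To make the ‘human judgment over AI output’ principle above concrete, here is a minimal illustrative sketch of a review gate: a draft note from a scribing tool is held until a practitioner explicitly approves it before it enters the health record. Everything here (the DraftNote type, the record store, the identifiers) is hypothetical for illustration and is not any vendor’s API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DraftNote:
    text: str
    source: str = "ai_scribe"            # provenance travels with the note
    reviewed_by: str | None = None       # unset until a practitioner signs off
    reviewed_at: datetime | None = None

def commit_to_record(record: list, note: DraftNote,
                     practitioner_id: str, approved: bool) -> bool:
    """Only a note the practitioner has read and approved enters the record."""
    if not approved:
        return False                     # draft is discarded or sent back for editing
    note.reviewed_by = practitioner_id
    note.reviewed_at = datetime.now(timezone.utc)
    record.append(note)
    return True

# Usage: the AI draft is never saved without an explicit human decision.
record: list[DraftNote] = []
draft = DraftNote(text="Patient reports three days of intermittent headache...")
# The practitioner reads the draft, corrects any errors, then approves or rejects it.
commit_to_record(record, draft, practitioner_id="PRAC-0001", approved=True)
```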
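Similarly, the ‘consent before recording’ step described under informed consent can be sketched as a workflow that refuses to start capturing a consultation until an explicit consent answer has been given and logged. The start_recording function and the consent log below are placeholders assumed for illustration only, not a real scribing product’s interface.

```python
from datetime import datetime, timezone

def obtain_consent() -> bool:
    """Explicit consent is the first step; nothing is recorded before it."""
    answer = input("Do you consent to this consultation being recorded and "
                   "transcribed by an AI scribe? (yes/no): ")
    return answer.strip().lower() == "yes"

def start_recording() -> None:
    print("Recording started with documented patient consent.")  # placeholder capture step

def record_consultation(consent_log: list) -> None:
    consented = obtain_consent()
    # The patient's response is noted either way, as the guidance recommends.
    consent_log.append({"consented": consented,
                        "timestamp": datetime.now(timezone.utc).isoformat()})
    if not consented:
        print("Consent declined: recording does not start.")
        return
    start_recording()

consent_log: list[dict] = []
record_consultation(consent_log)
```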
Other professional obligations in each Board’s Code of Conduct or equivalent that are relevant to the use of AI in practice include:
1 Oxford Reference: https://www.oxfordreference.com/display/10.1093/oi/authority.20110803095426960