The Australian Nursing and Midwifery Federation (ANMF) recently made a submission to the Australian Government’s Senate Select Committee on Adopting Artificial Intelligence (AI).
The Committee sought evidence on opportunities and impacts for Australia arising from the uptake of AI technologies. This included consideration of recent national and international trends and opportunities, potential benefits and risks, and approaches to mitigating harms. The submission was underpinned by the work of the ANMF Federal Office’s National Policy Research Unit and led to Federal Secretary Annie Butler and Associate Professor Micah Peters appearing before the Senate Committee at a hearing on 17 July.
While the adoption of AI might seem beyond the traditional purview of the ANMF, we recognise that AI has significant potential to revolutionise and benefit the way many industries operate and how people work. This extends to almost every facet of healthcare, from screening, diagnosis, and assistance with documentation through to identifying and alerting staff to patients at risk of adverse outcomes. While the potential benefits of AI are extensive and could help many clinicians manage growing workloads as well as enhance and support patient outcomes and experiences, as with all innovations there are also many potential risks if AI’s penetration into healthcare is not carefully understood, planned, managed, and overseen.
The ANMF recognises that healthcare workers, particularly nurses and midwives who are often community members’ first point of contact, are already using, and will increasingly use, AI technologies in everyday care delivery. AI’s potential uses in the healthcare setting are numerous, including but not limited to streamlining repetitive tasks that can be automated to allow more clinician-to-patient time, enhancing diagnostics, remote health monitoring, health chatbots, drug discovery and development, treatment planning, risk stratification and triaging, and education. In our submission, the ANMF expressed a generally positive view of the potential of AI in healthcare but cautioned that its adoption, as with all new technologies and innovations, is extremely complex and must be carefully overseen and monitored. Risks can arise in the use of AI from multiple sources, including the way AI has been developed and ‘trained’ on existing data sets, and overreliance on AI for decision-making in place of human judgement, experience, and knowledge.
“A robot may not injure a human being or, through inaction, allow a human being to come to harm.” – The First Law of Robotics – Isaac Asimov (I, Robot)
The submission outlined a range of concerns and considerations that must be addressed prior to the wider implementation of AI across all industries. These included concerns about data safety and privacy, given the potential for data breaches to expose confidential patient information; risks of bias in AI diagnostic models and algorithms that can perpetuate harm for already vulnerable groups, including ethnic and racial minorities and sexually diverse patients; and concerns that AI chatbots may be used to gatekeep access to human practitioners, leading to the unplanned dehumanisation of healthcare.
Related to, but not directly part of, healthcare delivery, the ANMF also highlighted potential risks in the use of AI in the education sector. While the emergence and penetration of generative AI into the education sector presents a transformative opportunity, especially for simulation and course planning, the ANMF expressed concern that AI – particularly generative AI – could be misused in ways that reduce the quality of training and education and affect student outcomes and knowledge acquisition. This could impair the preparation of the current and future nursing and midwifery workforce and might, in turn, affect professional practice as well as the safety, health, and wellbeing of patients. Here the ANMF emphasised the need to support the adoption of AI with education and training for educators and students to ensure ethical and appropriate use.
Chief among the ANMF’s concerns with AI is the potential for workforce redundancies and the dehumanisation of healthcare. The ANMF is steadfast in its view that AI should never be used as a substitute for human-provided care, but should instead be a tool the workforce can use in the delivery of person-centred care. The human elements of healthcare (such as responsiveness, assurance, courtesy, empathy, communication, and understanding) have long been recognised as immensely important for the delivery of care, and removing them is grossly inappropriate and poses serious potential for harm. Here the ANMF advised that AI must be carefully, equitably, and safely integrated into healthcare systems in ways that support the workforce and enhance healthcare delivery in terms of effectiveness, safety, cost-effectiveness, and appropriateness – without dehumanising the sector.
The ANMF advised the Select Committee that safeguarding measures, ensuring systems are designed and maintained to rigorous national and international standards, must be developed in consultation with consumers and key stakeholders so that AI is adopted and integrated into healthcare and wider systems in a safe, controlled, and effective way that equitably benefits the Australian community.
The full submission is available here.
Jarrod Clarke is a Research Assistant in the ANMF National Policy Research Unit (Federal Office) based in the Rosemary Bryant AO Research Centre, Clinical and Health Sciences, University of South Australia.
Associate Professor Micah DJ Peters is the Director of the ANMF National Policy Research Unit (Federal Office) based in the Rosemary Bryant AO Research Centre, Clinical and Health Sciences, University of South Australia.
2 Responses
As an EEN & HCW, I believe AI is a helpful assistant out in the field. However, it does not have the 8 senses one develops over time. Human energy can be read by another human.
I am a great advocate of learning by doing after a block of study. One is always studying and learning, as there is constant change in medicine and in your client/patient.
As the world speeds up, which I believe leaves more room for errors, AI can alert the nurse to valuable information about the specific client to support better outcomes for all parties concerned.
I believe AI can be a great companion for those experiencing mental health issues or isolation. I still wonder whether a monitor will be needed, in particular given potential software intrusions from scammers.
Hi Kay,
Thanks for your comment. We certainly see the potential for AI to bring great benefit, but as with all technologies, there are also possibilities for harm and poor outcomes. The emergence of AI in healthcare education, workplaces, and nursing is already happening and will continue. So long as implementation is carefully planned, monitored, evaluated, and guided by appropriate policy that focuses on safety, benefit, and equitability, AI can hopefully contribute to and support healthcare professionals and staff in ways that enhance their practice and working lives.