Artificial Intelligence and the Functions of Government: a welcome way forward or cause for concern?

Introduction

On 30 November 2022, OpenAI, a then little-known San Francisco-based company founded in 2015, released an artificial intelligence (AI) chatbot known as ChatGPT. In short order, it became an ‘overnight’ internet sensation that has generated considerable hype and interest among ‘geeks’ and ‘nerds’, data scientists and first-time users alike, to name but a few. One could be forgiven for thinking the reception afforded ChatGPT heralded the invention of AI. In fact, AI has been with us since the 1950s, when the term first appeared in a research proposal authored by academic computer scientists.

AI is embedded in our everyday lives, although we may not be conscious of it. AI powers our ATMs, helps us navigate our dealings with telcos, utilities, and other service providers, is integral to the development of driverless cars and is an intrinsic part of ‘virtual assistants’ such as Apple’s Siri, Amazon’s Alexa, and Google Assistant.

AI has long featured in popular culture. Virtual assistants such as J.A.R.V.I.S. (Just A Rather Very Intelligent System) in the Iron Man movies and GRIOT in the Wakanda movies showed us the positive side of AI, while the HAL 9000 computer in 2001: A Space Odyssey, the beguiling humanoid robot named Ava in Ex Machina, the malevolent AI Agent Smith in The Matrix franchise and Skynet, the self-aware artificial neural network hell-bent on the destruction of humanity in the Terminator movies, showed us the ‘dark’ side of AI.

The Australian Government too has been using AI for some time to perform its functions and to deliver services to the community. The advent of ChatGPT, however, presents an opportunity to hit the ‘pause button’ and to reflect on the pros and cons of that use.

ChatGPT: An Explanation

ChatGPT is a chatbot built on a “large language model” (LLM). An LLM is a neural network that generates human-like text by taking the text it has been given and repeatedly predicting the most plausible next piece of language. Hence “GPT”, which stands for “Generative Pre-trained Transformer”: the model is first ‘pre-trained’ on a vast body of text to learn these predictions, and is then refined by human trainers who rate and correct its responses so that its answers improve over time.
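The prediction loop described above can be sketched in miniature. The toy ‘bigram’ model below is purely illustrative (nothing like GPT’s transformer architecture or billions of parameters), but it shows the same core mechanic: look at the text so far and repeatedly predict the most likely next word, feeding each prediction back in.

```python
# Minimal, illustrative sketch of next-token prediction -- the core idea
# behind an LLM like ChatGPT. Real models use neural networks with billions
# of learned parameters; a simple word-pair frequency table stands in here.
from collections import Counter, defaultdict

training_text = (
    "the model predicts the next word the model generates text "
    "the chatbot answers the question"
).split()

# Count, for each word, which word tends to follow it in the training data.
follows = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen in training, or None."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

# Generate text one word at a time, feeding each prediction back in.
word = "the"
output = [word]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

A model this crude quickly falls into repetitive loops; scale the same idea up to billions of parameters trained on hundreds of billions of words, and the predictions become fluent prose.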

ChatGPT is built on OpenAI’s GPT-3.5 series of models, among the largest LLMs built so far and the best yet at producing realistic, human-like text. It was trained using a supercomputer designed by Microsoft. It has some 175 billion parameters (the adjustable internal values a model learns during training), compared with just 117 million for the original GPT-1, and was trained on roughly 300 billion words of text. It has a seemingly endless number of uses including answering general questions, writing essays and computer code, translating languages, and creating music and poetry. It should be noted, however, that ChatGPT’s knowledge is limited to its training data, which only extends to 2021.

The Australian Government’s Focus on AI

Australian Governments of both major political persuasions have expressed support for making Australia a global leader in the development and adoption of trusted, secure, and responsible AI. An AI Action Plan was developed as part of the previous Government’s Digital Economy Strategy, and AI appears on the List of Critical Technologies in the National Interest, currently under review.

The focus on AI continues under the current Government. It has announced an investment of $1 billion in the form of a Critical Technology Fund, part of a broader National Reconstruction Fund (as set out in the National Reconstruction Fund Corporation Bill 2022, which at the time of writing is still to be passed by Parliament), intended to support home-grown innovation and value creation in areas such as AI, robotics, and quantum computing. In February 2023, Ed Husic, the responsible Minister, announced he was looking to develop a national strategy on AI along similar lines to those for robotics and quantum computing. The Minister’s vision includes a future where people and machines work alongside each other as colleagues.1

How does the Australian Government use AI?

AI can be, and is, used in diverse fields including health, ageing and disability, education, natural resources and environment, infrastructure, transport, and defence.

Some examples of AI use by the Australian Government are as follows:

  • Chatbots and virtual agents are used at agencies’ call centres instead of people. If you visit an ATO web page, you will see a virtual assistant by the name of ‘Alex’ asking if you need help with your query. The 2019 David Thodey Independent Review of the Australian Public Service found that “…about 40 per cent of APS employee time is currently spent on highly automatable tasks such as data collection and processing. This time could be devoted to higher value roles — including direct customer service.”2
  • Some agencies use AI to detect and predict customer fraud and non-compliance. For example, the ATO uses ANGIE (Automated Network & Grouping Identification Engine) to detect suspicious activity within the affairs of wealthy individuals and corporate groups. Services Australia has been working with Trellis Data to train its Fraud and Compliance AI platform to predict non-compliance among 100 of its biggest debtors, with an accuracy of 79.93%. Trellis Data says that implementation of this program could achieve savings of up to $400 million p.a.3
  • Airservices Australia, which provides a range of services to the aviation industry, used Trellis Data’s AI platform to predict whether an aircraft coming in to land at an airfield will be required to abort its approach and circle back to try again. The model was deployed in days and predicted 72% of the go-arounds that occurred, with overall accuracy of over 99%. The AI platform has now been deployed and can be adjusted as required.4
  • Facial recognition, a biometric form of AI, is used by the Australian Border Force to verify the identity of passport holders at ten of Australia’s international airports through an automated process known as ‘SmartGate’, to reduce immigration queues and allow immigration officials to focus their attention on higher risk cases that do not pass the automated process.
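The go-around figures above illustrate a common pattern with rare events: a model can catch only 72% of them yet still report “over 99%” overall accuracy, because routine landings dominate the denominator. A back-of-the-envelope sketch, using assumed, hypothetical numbers (not Airservices Australia’s actual figures), makes the distinction concrete:

```python
# Why can a model catch only 72% of go-arounds yet be "over 99% accurate"?
# Go-arounds are rare, so overall accuracy is dominated by the many routine
# landings the model correctly leaves alone. All numbers here are assumed
# for illustration only.

total_landings = 100_000
go_arounds = 500                      # rare event: 0.5% of approaches
caught = int(go_arounds * 0.72)       # recall: 72% of go-arounds predicted
false_alarms = 300                    # assumed number of false positives

# Correct predictions = go-arounds caught + routine landings not flagged.
correct = caught + (total_landings - go_arounds - false_alarms)
accuracy = correct / total_landings
recall = caught / go_arounds

print(f"recall:   {recall:.0%}")      # share of go-arounds caught
print(f"accuracy: {accuracy:.2%}")    # share of all predictions correct
```

This is why both figures are worth reporting: for rare but costly events, the share of events actually caught (recall) often matters more than headline accuracy.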

Is There A Downside To Using AI?

AI, including ChatGPT, like human beings, has its frailties. OpenAI warns us of this; its website says ChatGPT can produce incorrect or nonsensical answers, be excessively verbose, and will sometimes respond to harmful instructions or exhibit biased behaviour. After all, AI is only as good as the data sets it has access to.

There are many other risks associated with Government use of AI, of which the Government is aware, that need to be considered and ameliorated where possible. Some of these risks and issues are as follows:

  • The use of AI to perform tasks that lend themselves to automation will inevitably, over time, lead to job losses across the board in the APS and for contractors. Will the people affected be able to be retrained to work with AI or in other segments of the ICT sector, or to find other gainful employment?5 Where AI is used for HR and other employment and industrial relations matters, there is the risk that the information or documents it produces will offend various workplace laws, including the prohibitions against sexual harassment, discrimination, and unfair dismissals.
  • AI may use information that is subject to copyright or infringe a registered patent or design. While the Government may be protected from a breach claim by relying on the ‘Crown Use’ defence, it may be exposed to compensation claims as provided for by the relevant legislation.
  • Most Australian Government agencies6 are subject to the Privacy Act 1988 and must comply with the 13 Australian Privacy Principles. If such an agency uses AI for a particular task and in doing so breaches a person’s privacy rights, especially where sensitive information is involved, the agency may be exposed to legal action.
  • If legal advice is sought from an AI rather than a lawyer, the advice will not be protected from disclosure by legal professional privilege and risks becoming discoverable in court proceedings, which may lead to unforeseen and adverse consequences.
  • AI-informed automated decision making (ADM) is being used by various Government departments and agencies to improve productivity and service standards. Yet questions about the use of ADM remain: for example, is the decision in line with administrative law principles, and is the decision-making process backed by appropriate legislation? Other concerns have been identified, such as whether the ADM decision is affected by bias and discrimination reflected in the data sets it may have accessed, whether the decision can be explained such that the public can have trust and confidence in the process, and how the ADM process applies the exercise of discretion usually reserved for a human decision maker.7

New legislation, regulations, and policy frameworks, as well as adherence to the Department of Industry, Science and Resources’ AI Ethics Principles and AI Ethics Framework,8 may well ameliorate the risks associated with this new technology. However, there is little doubt the increasing use of AI has opened a Pandora’s box of issues for the Australian Government to grapple with.

Conclusion

Australia’s Chief Scientist, Dr Catherine Foley, has been reported as saying Australia has not been ready for the new technologies and has suggested her office might be asked to prepare a Rapid Research Information report on the implications of AI.9 Despite this, the march of AI appears inexorable. It has been reported that it took Spotify five months and Instagram 2.5 months to reach their first million users, while ChatGPT did it in five days. By January 2023, two months after launch, ChatGPT had reached 100 million active users, a milestone that took TikTok nine months to achieve.

The big tech companies are clearly wedded to using and developing AI. Microsoft has committed to investing US$10 billion in AI research and development in partnership with OpenAI and has used a version of ChatGPT to power its ‘Bing’ chatbot. Google recently released its chatbot ‘Bard’ as its answer to ChatGPT. Meta, the owner of Facebook, Instagram, and WhatsApp, has developed an AI language model called ‘Galactica’. As the ‘kinks’10 affecting these technologies are ironed out and they become more sophisticated, they will increasingly become part of the machinery of Government.

Provided we can manage the risks associated with the use of AI, the benefits are seemingly endless. We are indeed entering a brave new world.

Author’s Note: this article was written by a sentient human being and not by ChatGPT or any other form of AI…

 

1 Speech given to the Australian Financial Review’s Workforce Summit on 22 February 2023 reported by Paul Smith, Technology Editor, in the AFR on 22 February 2023

2 David Thodey Independent Review of the Australian Public Service, p 25

3 Trellis Data website at trellisdata.com under the heading ‘Use Cases’ and ‘TIP Fraud and Compliance, Government’ Services Australia

4 Ibid. under the heading ‘Use Cases’ and ‘TIP Organisation Optimiser, Aviation‘ Airservices Australia

5 In a 2019 report by CSIRO’s Data61, it was predicted that Australian industry would need up to 161,000 new specialist AI workers by 2030.

6 The term ‘agency’ is defined in section 6(1) of the Privacy Act 1988

7 See Commonwealth of Australia, “Positioning Australia As A Leader in Digital Economy Regulation – Automated decision making and AI regulation” Issues Paper, March 2022. See also Commonwealth Ombudsman, Automated Decision-making Better Practice Guide 2019 published in March 2020 and Chapter 19, Privacy Act Review, Report 2022

8 In 2019, the Department of Industry, Innovation and Science (now the Department of Industry, Science and Resources) released eight AI Ethics Principles as part of a broader AI Ethics framework. Further information can be accessed at https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework See also an article by Dr Catherine Foley, Why we need to think about diversity and ethics in AI, 12 January 2022, Australian Academy of Technological Sciences & Engineering that can be found at https://www.atse.org.au/news-and-events/article/why-we-need-to-think-about-diversity-and-ethics-in-ai/. See also Microsoft’s Responsible AI Standard, v2 General Requirements June 2022

9 Hans van Leeuwen, Australian Financial Review, 30 January 2023

10 See the articles written by James Vincent, ‘Google’s AI chatbot Bard makes factual error in first demo’, The Verge, 9 February 2023 that can be accessed at https://www.theverge.com/2023/2/8/23590864/google-ai-chatbot-bard-mistake-error-exoplanet-demo and by Kevin Roose, ‘Bing’s A.I. Chat: ‘I Want to Be Alive 😈’’ New York Times, 16 February 2023 that can be accessed at https://www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html

Stephen Newman

Executive Counsel,
Hope Earle Lawyers

Important Disclaimer - This publication is general in nature and is not intended to be, nor should be, considered as legal advice. For legal advice please contact Hope Earle Lawyers on +61 3 9600 3330.