Introduction
On 30 November 2022, OpenAI, a then little-known San Francisco-based company founded in 2015, released an artificial intelligence (AI) chatbot known as ChatGPT. In short order it became an ‘overnight’ internet sensation, generating considerable hype and interest among ‘geeks’ and ‘nerds’, data scientists and first-time users alike. One could be forgiven for thinking the reception afforded ChatGPT heralded the invention of AI. In fact, AI has been with us since the 1950s, when the term was coined by academic computer scientists.
AI is embedded in our everyday lives, although we may not be conscious of it. AI powers our ATMs, helps us navigate our dealings with telcos, utilities, and other service providers, is integral to the development of driverless cars and is an intrinsic part of ‘virtual assistants’ such as Apple’s Siri, Amazon’s Alexa, and Google Assistant.
AI has long featured in popular culture. Virtual assistants such as J.A.R.V.I.S. (Just A Rather Very Intelligent System) in the Iron Man movies and GRIOT in the Wakanda movies showed us the positive side of AI, while the HAL 9000 computer in 2001: A Space Odyssey, the beguiling humanoid robot named Ava in Ex Machina, the malevolent AI Agent Smith in The Matrix franchise and Skynet, the self-aware artificial neural network hell-bent on the destruction of humanity in the Terminator movies, showed us the ‘dark’ side of AI.
The Australian Government too has been using AI for some time to perform its functions and to deliver services to the community. The advent of ChatGPT, however, presents an opportunity to hit the ‘pause button’ and to reflect on the pros and cons of that use.
ChatGPT: An Explanation
ChatGPT is a “large language model” (LLM) chatbot built with machine learning. It is a neural network that generates human-like text by taking a piece of language and predicting the most appropriate piece of language to follow it. It can do this because it was first ‘pre-trained’ on vast amounts of text and then refined by a team of human trainers, who adjust the model when it gives an incorrect or unhelpful answer so that it does better the next time; hence “GPT”, which stands for “Generative Pre-trained Transformer”.
ChatGPT is built on OpenAI’s GPT-3.5 series of models, among the largest LLMs built so far and the best yet at producing realistic, human-like text. It was built using a supercomputer designed by Microsoft. It has 175 billion parameters (the adjustable values a model learns during training), compared with the original GPT-1, which had just 117 million, and it was trained on some 300 billion words. It has a seemingly endless number of uses, including answering general questions, writing essays and computer code, translating languages, and creating music and poetry. It should be noted, however, that ChatGPT’s knowledge is limited to its training data, which is only current to 2021.
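To make the idea of ‘predicting the next piece of language’ concrete, the short Python sketch below builds a toy next-word predictor from a three-sentence corpus. It is a minimal illustration only and assumes nothing about OpenAI’s actual code: ChatGPT relies on a neural network with billions of parameters trained on billions of words, not a simple frequency table, but the underlying task, choosing the most likely next word given what came before, is the same.

    from collections import Counter, defaultdict

    # A tiny, invented training corpus. ChatGPT was trained on hundreds of
    # billions of words; three sentences are enough to illustrate the idea.
    corpus = [
        "the government uses ai to deliver services",
        "the government uses ai to detect fraud",
        "the minister announced a national strategy on ai",
    ]

    # Count, for each word, how often every other word follows it.
    following = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for current_word, next_word in zip(words, words[1:]):
            following[current_word][next_word] += 1

    def predict_next(word: str) -> str:
        """Return the word most often seen after `word` in the corpus."""
        candidates = following.get(word)
        if not candidates:
            return "<unknown>"
        return candidates.most_common(1)[0][0]

    print(predict_next("uses"))  # prints "ai"
    print(predict_next("to"))    # prints "deliver" (tied with "detect"; the first one counted wins)

Even this toy model shows why the quality of the output depends entirely on the data the model has seen, a point returned to below.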
The Australian Government’s Focus on AI
Australian Governments of both major political persuasions have expressed support for making Australia a global leader in the development and adoption of trusted, secure, and responsible AI. An AI Action Plan was developed as part of the previous Government’s Digital Economy Strategy, and AI was included on the List of Critical Technologies in the National Interest, which is currently under review.
The focus on AI continues under the current Government. It has announced a $1 billion Critical Technology Fund as part of the broader National Reconstruction Fund (established by the National Reconstruction Fund Corporation Bill 2022, which at the time of writing is yet to be passed by Parliament), intended to support home-grown innovation and value creation in areas such as AI, robotics, and quantum computing. In February 2023, Ed Husic, the responsible Minister, announced he was looking to develop a national strategy on AI along similar lines to those for robotics and quantum computing. The Minister’s vision includes a future where people and machines work alongside each other as colleagues.1
How does the Australian Government use AI?
AI can be, and is, used in diverse fields including health, ageing and disability, education, natural resources and the environment, infrastructure, transport, and defence.
Some examples of AI use by the Australian Government are as follows:
Is There A Downside To Using AI?
Like human beings, AI, including ChatGPT, has its frailties. OpenAI warns us of this: its website says ChatGPT can produce incorrect or nonsensical answers, be excessively verbose, and will sometimes respond to harmful instructions or exhibit biased behaviour. After all, AI is only as good as the data sets it has access to.
There are many other risks associated with Government use of AI (of which the Government is aware) that need to be considered and, where possible, ameliorated. Some of these risks and issues are as follows:
New legislation, regulations, and policy frameworks, as well as adherence to the Department of Industry, Science and Resources’ AI Ethics Principles and AI Ethics Framework,8 may well ameliorate the risks associated with this new technology. However, there is little doubt that the increasing use of AI has opened a Pandora’s box of issues for the Australian Government to grapple with.
Conclusion
Australia’s Chief Scientist, Dr Catherine Foley, has been reported as saying Australia has not been ready for these new technologies and has suggested her office might be asked to prepare a Rapid Research Information report on the implications of AI.9 Despite this, the march of AI appears inexorable. It has been reported that Spotify took five months and Instagram two and a half months to reach their first million users, while ChatGPT did it in five days. By January 2023, two months after launch, it had reached 100 million active users, a milestone that took TikTok nine months.
The big tech companies are clearly wedded to using and developing AI. Microsoft has committed to investing US$10 billion in AI research and development in partnership with OpenAI and has used a version of ChatGPT to power its ‘Bing’ chatbot. Google recently released its own chatbot, ‘Bard’, as its answer to ChatGPT. Meta, the owner of Facebook, Instagram, and WhatsApp, has released an AI language model called ‘Galactica’. As the ‘kinks’10 affecting these technologies are ironed out and they become more sophisticated, they will increasingly become part of the machinery of Government.
Provided we can manage the risks associated with the use of AI, the benefits are seemingly endless. We are indeed entering a brave new world.
Author’s Note: this article was written by a sentient human being and not by ChatGPT or any other form of AI…
1 Speech given to the Australian Financial Review’s Workforce Summit on 22 February 2023, reported by Paul Smith, Technology Editor, in the AFR on 22 February 2023
2 Pg 25
3 Trellis Data website at trellisdata.com, under the headings ‘Use Cases’ and ‘TIP Fraud and Compliance, Government’, Services Australia
4 Ibid., under the headings ‘Use Cases’ and ‘TIP Organisation Optimiser, Aviation’, Airservices Australia
5 In a 2019 report by CSIRO’s Data61, it was predicted that Australian industry would need up to 161,000 new specialist AI workers by 2030.
6 The term ‘agency’ is defined in section 6(1) of the Privacy Act 1988
7 See Commonwealth of Australia, “Positioning Australia As A Leader in Digital Economy Regulation – Automated decision making and AI regulation”, Issues Paper, March 2022. See also Commonwealth Ombudsman, Automated Decision-making Better Practice Guide 2019, published in March 2020, and Chapter 19 of the Privacy Act Review Report 2022
8 In 2019, the Department of Industry, Innovation and Science (now the Department of Industry, Science and Resources) released eight AI Ethics Principles as part of a broader AI Ethics Framework. Further information can be accessed at https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework. See also an article by Dr Catherine Foley, ‘Why we need to think about diversity and ethics in AI’, 12 January 2022, Australian Academy of Technological Sciences & Engineering, which can be found at https://www.atse.org.au/news-and-events/article/why-we-need-to-think-about-diversity-and-ethics-in-ai/. See also Microsoft’s Responsible AI Standard, v2 General Requirements, June 2022
9 Hans van Leeuwen, Australian Financial Review, 30 January 2023
10 See the articles written by James Vincent, ‘Google’s AI chatbot Bard makes factual error in first demo’, The Verge, 9 February 2023, which can be accessed at https://www.theverge.com/2023/2/8/23590864/google-ai-chatbot-bard-mistake-error-exoplanet-demo, and by Kevin Roose, ‘Bing’s A.I. Chat: “I Want to Be Alive 😈”’, New York Times, 16 February 2023, which can be accessed at https://www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html
Executive Counsel,
Hope Earle Lawyers