The Intersection of Government and AI: Exploring Accountability
By Rhiana Dabboussy
The development of Artificial Intelligence (AI) is accelerating at an exponential rate. Just look at the statistics – they are simply mind-boggling.
Did you know that in 2023, AI spending is estimated to have increased by 27% over the course of the year, reaching $154 billion globally?
Technology that was once as basic as asking Siri to find the nearest café or recommend a song can now detect complex fraud, combat the evils of cybercrime, operate machinery and even develop new drugs and treatments.
AI has the power to transform our economies, governments and businesses, and can improve the lives of everyday Australians through the automation of simple processes and services.
However, with AI comes significant challenges. Technology roadblocks, material costs, and scaling of products are just some of the practical challenges associated with wide-scale AI implementation.
In addition to these considerations, ethical concerns are often in question. Privacy breaches, discrimination and bias in algorithmic decision-making, and the opacity of processes can result in flawed AI products and harm to users.
In Australia, the now-infamous Robodebt fiasco proved just how harmful automated systems can be. For those unaware, in 2015, as part of its strategy to reduce the fiscal deficit, the Coalition government introduced Robodebt.
What was Robodebt? It was a compliance and debt recovery program to claw back overpayments to welfare recipients. The system was notoriously defective, though.
It led to $1.76 billion in unlawful debts being raised against 443,000 Australians over a period of five years. The impact was devastating, with many victims suffering financial hardship, anxiety and depression (a quick search on Google reveals just how bad it got).
So, as AI reshapes the ways in which we work, live, and play, there is a crucial question: how can governments remain accountable when implementing AI in the design and delivery of public services?
The OECD.AI Policy Observatory is a leading forum for advancing AI policy. The OECD's adoption of the OECD AI Principles in 2019 emphasises the importance of human-centred values in AI development, and places principles such as fairness, transparency and accountability at the forefront of design and delivery.
This has led to more than 60 countries putting in place AI strategies and policies which recognise the opportunities and risks of AI.
As part of these policies, governments are learning how to make sure their AI systems are safe and trustworthy. One approach being used is algorithmic accountability.
Algorithmic accountability means ‘ensuring that those that build, procure and use algorithms are eventually answerable for their impacts.’ 1
To do this, governments must manage and govern the risks of AI, while ensuring that AI design and delivery is both transparent and open. This can be achieved via a multifaceted approach, combining policy and law reform, regulatory supervision and auditing, and centring values-based principles in design strategy.
When it comes to law reform, the EU’s Digital Services Act, enacted in July 2022, along with Canada’s draft legislation known as the Artificial Intelligence and Data Act, and the United States’ proposed Algorithmic Accountability Act, all introduce provisions aimed at improving algorithmic transparency primarily within the private sector.
However, they do not provide clear guidance on how public administrations should employ algorithms. While Australia has several pieces of legislation regulating AI usage in specific settings or circumstances, it does not provide for a comprehensive regulatory framework.
On the other hand, the proposed EU Artificial Intelligence Act and the associated EU AI Liability Directive present substantial opportunities for fostering algorithmic accountability within the public sector.
While law reform is a clear step in the right direction, ensuring automated systems adhere to it is quite another.
One approach proposed to simplify and streamline this regulatory enforcement and compliance is through the development of Rules as Code.
This refers to the rewriting of legislation and regulations in machine-consumable formats so that they can be responded to and engaged with by computer systems. For example, Rules as Code would involve coded rules being released concurrently with natural language versions.
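The idea can be sketched concretely. Below is a minimal, hypothetical illustration of Rules as Code: an invented eligibility rule published both as a natural-language sentence and as an executable function. The rule, thresholds and names are assumptions for illustration only, not drawn from any actual legislation.

```python
# A minimal sketch of Rules as Code: a hypothetical eligibility rule
# released as executable code alongside its natural-language version.
# The rule and its thresholds are invented for illustration.

from dataclasses import dataclass

# Natural-language version (published concurrently with the code):
# "A person is eligible for the payment if they are at least 18 years
#  old and their fortnightly income is below $1,000."

@dataclass
class Person:
    age: int
    fortnightly_income: float

def is_eligible(person: Person) -> bool:
    """Machine-consumable version of the same rule."""
    return person.age >= 18 and person.fortnightly_income < 1000.0

print(is_eligible(Person(age=25, fortnightly_income=800.0)))  # True
print(is_eligible(Person(age=17, fortnightly_income=500.0)))  # False
```

Because the coded rule is authoritative alongside the prose, a service-delivery system can apply it consistently, and auditors can test it directly against the natural-language version.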
Rules as Code is already being trialled. The Department of Finance has backed an initiative exploring the feasibility of providing Rules as Code as a collective service to offer more straightforward and personalised digital experiences for citizens.
Meanwhile, neighbouring New Zealand is rolling out an ambitious project to help people in need better understand their legal eligibility for government assistance and to access support.
Across the world, independent oversight entities are being used in various ways to enhance the accountability of algorithms.
While some attack the ‘front end’ of AI service design by building rules and instructions around AI, other entities focus on assessing the quality of algorithms, as well as monitoring and supervising AI delivery.
These entities have a critical role to play in keeping the government of the day accountable for AI use.
AI products and services are only as valuable as the design that underpins them. Accordingly, perhaps the most important aspect of algorithmic accountability is ensuring that AI design is ethical, fit for purpose, and transparent.
Policy efforts in these areas are promising, but not comprehensive at present.
Trust is the foundation of AI design and delivery. With such significant consequences, it is imperative that governments ensure that technology is secure, transparent and open.
Governments must minimise the 'black box' by ensuring that processes and technologies are open to scrutiny and can be understood.
This is vital as governments make their way through the age of AI and automation.
1 The Ada Lovelace Institute, AI Now Institute and Open Government Partnership