Introduction
At the Tax Practitioners Board (TPB), we are committed to the responsible use of Artificial Intelligence (AI) to help us administer the Tax Agent Services Act 2009 (TASA) more effectively. Our transparency statement outlines how and why we use AI, as well as our approach to monitoring and measuring its effectiveness to ensure compliance with regulations and to safeguard the public from any adverse impacts.
Our definition of AI
We use AI within our technology and data environments to enhance our analytics and system capabilities and support effective decision making. Our definition of AI aligns with that of the Australian Taxation Office (ATO) and the Australian Government policy for the responsible use of AI, and adheres to the OECD AI principles. We consider AI to be any application of machine learning to data, as well as the use of system-embedded generative AI tools. Solutions built solely on rules-based conditions are not considered AI.
Purpose of using AI
AI is used to bring efficiency and improvements to our current processes. We use AI to assist staff in conducting complex analysis and intelligence gathering from large volumes of complex data efficiently. This supports risk identification and effective decision making in line with our primary objectives, which are to:
- improve our operational effectiveness
- ensure and enhance the integrity of the tax practitioner profession and the tax system
- strengthen our compliance approach by identifying risks and taking action to mitigate them
- measure the effectiveness of our treatment and compliance activities
- support policy development and insights.
AI governance
We have a well-defined and structured approach to evaluating the performance and effectiveness of AI in our systems. These industry-standard methods are implemented to ensure our models function accurately and are fit for purpose. We continuously assess our tools and models against agreed performance metrics, ethical standards and operational goals. These metrics include accuracy, error rate, precision and recall, and model and feature drift detection. We run these metrics regularly to monitor model performance and update our models with feedback for continuous improvement.
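For illustration only, the sketch below shows how metrics such as accuracy, error rate, precision, recall and a simple feature drift check can be computed in Python. The data, column values and thresholds are hypothetical assumptions for the example and do not reflect TPB systems or models.

```python
# Minimal, illustrative sketch only: hypothetical labels, predictions and
# feature samples; not TPB code, data or thresholds.
from sklearn.metrics import accuracy_score, precision_score, recall_score
from scipy.stats import ks_2samp

# Hypothetical human-verified outcomes vs. model predictions (1 = risk flagged)
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

accuracy = accuracy_score(y_true, y_pred)       # overall agreement
error_rate = 1 - accuracy                       # complement of accuracy
precision = precision_score(y_true, y_pred)     # flagged cases that were genuine
recall = recall_score(y_true, y_pred)           # genuine cases that were flagged

# Simple feature drift check: compare a feature's training-time distribution
# with its current distribution using a two-sample Kolmogorov-Smirnov test.
baseline_feature = [0.2, 0.4, 0.35, 0.5, 0.3, 0.45, 0.25, 0.4]
current_feature = [0.6, 0.7, 0.55, 0.8, 0.65, 0.75, 0.6, 0.7]
statistic, p_value = ks_2samp(baseline_feature, current_feature)
drift_detected = p_value < 0.05                 # hypothetical threshold

print(f"accuracy={accuracy:.2f} error={error_rate:.2f} "
      f"precision={precision:.2f} recall={recall:.2f} drift={drift_detected}")
```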
Our governance principles and guidelines for AI policies are designed to ensure that our AI systems are safe, unbiased and fair, and subject to human oversight to minimise any potential negative impact on the public. They are also in line with the Australian Government AI Ethics Framework and the ATO's data ethics principles, and comply with the principles and mandatory requirements for enablement and engagement outlined in the Australian Government Policy for the responsible use of AI in government (September 2024, version 1.1). This includes:
- being accountable for implementation within the TPB
- notifying the Digital Transformation Agency (DTA) about new high-risk use cases or any changes to the requirements
- being a contact point and engaging in whole-of-government AI forums and processes
- providing ongoing staff training on AI fundamentals and additional role-specific training as required
- maintaining an up-to-date AI transparency statement, available to the public, that describes our compliance and the measures we use to monitor the effectiveness of deployed AI systems.
We do not use any AI tool involving direct public interaction to administer a complete decision. We currently offer a number of self-service tools on our website, such as the Qualifications Advisory Service, Public Register and My Profile. These tools allow direct public interaction, but they are built on well-established rules and provide information only.
How we use AI
We use AI for the purposes outlined in the classification system for AI use in government. This includes usage patterns such as 'decision making and administrative actions', 'analytics for insights' and 'workplace productivity', which we apply to identify compliance, registration and regulatory risks and to improve operational efficiency.
We employ data science and machine learning techniques to develop AI models (predictive and statistical) that identify behaviours which could put the integrity of the tax system and profession at risk. These matters are then profiled for human verification and further treatment. Additionally, AI is used to develop tailored self-service reporting tools to support the decision-making process. In our registration processes, we use AI to identify and classify complex matters for allocation, prioritisation and quality assurance.
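As a purely illustrative example, the following sketch shows how a predictive model could score matters and refer high-scoring ones for human verification. The feature names, data, model choice and threshold are hypothetical assumptions and do not describe TPB models or case data.

```python
# Illustrative sketch only: a hypothetical predictive model that scores
# matters for risk and queues high-scoring ones for human verification.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical features derived from structured case data
train = pd.DataFrame({
    "late_lodgements": [0, 5, 1, 8, 0, 3],
    "prior_complaints": [0, 2, 0, 3, 1, 0],
    "years_registered": [10, 2, 7, 1, 12, 4],
})
labels = [0, 1, 0, 1, 0, 1]  # 1 = previously confirmed risk behaviour

model = GradientBoostingClassifier().fit(train, labels)

new_cases = pd.DataFrame({
    "late_lodgements": [6, 0],
    "prior_complaints": [1, 0],
    "years_registered": [3, 9],
})
scores = model.predict_proba(new_cases)[:, 1]

# The model only prioritises: anything above the (hypothetical) threshold
# is profiled and referred to staff for verification, not auto-actioned.
for case_id, score in enumerate(scores):
    if score >= 0.7:
        print(f"case {case_id}: score {score:.2f} -> refer for human review")
```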
Examples of how we use AI
- We identify compliance risks by reviewing large volumes of structured and unstructured data to develop models that identify underlying behaviours and classify them for human review and verification.
- We identify complex applications and assign them to the relevant team for quick and efficient processing.
- We use AI to analyse free-text responses from our bi-annual survey results, capturing insights on respondent sentiment and trends. This has improved an analysis and decision-making process that was previously manual and time consuming (a brief illustrative sketch follows this list).
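The sketch below is a toy, lexicon-based illustration of how free-text responses can be summarised by sentiment. The word lists and responses are hypothetical, and real analysis of this kind would typically rely on a proper natural language processing model rather than a hand-built lexicon.

```python
# Illustrative sketch only: a toy lexicon-based approach to summarising
# sentiment in free-text survey responses. Lexicon and responses are hypothetical.
from collections import Counter

POSITIVE = {"helpful", "clear", "efficient", "easy", "responsive"}
NEGATIVE = {"slow", "confusing", "difficult", "delay", "unclear"}

responses = [
    "Registration was easy and the staff were responsive.",
    "The process felt slow and the guidance was confusing.",
    "Clear instructions, efficient turnaround.",
]

def score(text: str) -> str:
    # Count positive vs. negative words after simple punctuation stripping.
    words = {w.strip(".,").lower() for w in text.split()}
    balance = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if balance > 0 else "negative" if balance < 0 else "neutral"

summary = Counter(score(r) for r in responses)
print(summary)  # e.g. Counter({'positive': 2, 'negative': 1})
```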
Last modified: 24 June 2025