Interest in using artificial intelligence (AI) in applications is growing among Australian business and tax software providers, according to the Australian Taxation Office.
The ATO regularly works with digital service providers (DSPs) to co-design software, services and other digital products that make it easier for businesses and tax professionals to meet reporting obligations. It also provides DSPs with support throughout the software development life cycle.
The ATO does not use AI in its own digital services; however, it told Boss Diaries that AI had become “a growing topic of interest” in its consultations with DSPs.
“The ATO and DSPs recognise there are challenges in leveraging AI in software, in particular, ensuring any large language model agents (e.g. ChatGPT like functions) are not categorised as providing ‘tax advice’ under the Tax Agent Services Act 2009 (TASA),” an ATO spokesperson said.
AI maturity unclear
The ATO said it was not able to comment on whether the DSP community’s AI capabilities had reached an advanced stage.
The ATO adheres to six ethical principles for the collection, storage and use of citizen data, including its own use of AI – primarily machine learning – for data analytics.
However, the principles don’t appear to cover the use of AI in commercial applications the agency co-designs with DSPs.
Guardrails coming
DSPs planning to use AI in their rules and logic systems may soon face additional reporting obligations, regardless of whether the ATO independently establishes its own guidelines.
The federal government is currently proposing that AI classed as “high-risk”, either because of its intended application or because it is a general-purpose AI system that meets a certain capability threshold, be subject to ten mandatory guardrails.
The guardrails are designed to prevent AI disasters before they happen, the federal government says. They include checks and balances, transparency measures and other reporting requirements aimed at promoting human safety.
Legal and regulatory challenges
The legal status of advisory products generated using AI presents a key challenge for regulators. For instance, in 2022 the US Consumer Financial Protection Bureau (CFPB) took legal action against a finance company that exposed its customers to overdrafts and penalties after it used a faulty algorithm in its savings app.
The CFPB sought remedies to address customer losses and fined the company $US2.7 million.
At the time, CFPB Director Rohit Chopra said: “Companies have long been held to account when they engage in faulty advertising, and regulators must do the same when it comes to faulty algorithms.”