Is AI the final SMSF frontier?
The SMSF industry relies heavily on technology for operational efficiency, minimising risk, and ensuring compliance. But is the development of AI the final SMSF frontier for how we will use technology?
As the new kid on the block, AI has received mixed reviews that range from frustration, scepticism and distrust to curiosity and awe.
But that has not stopped AI from emerging as a game changer for those ready to embrace the benefits of automation, predictive analytics, data protection and cybersecurity.
What is AI?
ChatGPT defines AI as the “simulation of human intelligence processes by machines, especially computer systems”. The ultimate goal is for machines to perform tasks that would otherwise require human intelligence.
While machine learning, deep learning, natural language processing, computer vision and robotics are now in use, the problem is the expectation that AI can fully replicate or exceed human intelligence.
There is still a long way to go to achieve artificial general intelligence (intelligence on par with a human), and work also continues towards the hypothetical concept of artificial superintelligence (intelligence that would outperform humans).
The jury is still out on whether either of these AI concepts will ever become a reality.
On a recent episode of The SMSF Experts Podcast, Jeevan Tokhi, head of Simple Fund 360 for BGL, reminded us that AI is only a tool – not a magic box.
Mr Tokhi also identified the risk of accepting an AI response without understanding the basis of that decision, which could lead to a deterioration of knowledge in the industry.
Ethical concerns
One of the ethical concerns is that a lack of human oversight and intervention, combined with system limitations, can result in bias.
ChatGPT, by way of example, was trained on data only up to 2022, so its responses cannot reflect legislative or regulatory developments after that date. All generative AI products, including Bard, Bing AI and Chatsonic, come with other limitations, such as providing broad generalisations and producing responses that require fact-checking.
Where the human touch is absent, can we be assured that the data is error-free, unbiased, risk-free and transparent? Or will we see the universal acceptance of AI systems without any balance, understanding or integrity?
Could AI replace original thought leadership and decision-making so that any future development of the SMSF industry becomes redundant?
While there are more questions than answers, the concern is that the finite pool of training data used by AI platforms limits their responses and, with them, the industry's thinking, potentially making AI the final SMSF frontier.
The classification of technology as an enabler, not a replacement, has never been more relevant.
Legislation and regulation
One of the biggest concerns is that information generated by AI systems may fail to comply with the SIS rules and regulations, and with the requirements and standards of professional and regulatory bodies.
While the SMSF industry has always been an early adopter of technology, the regulators and professional bodies lag well behind in setting relevant technology policies.
By way of example, data feeds have been in operation for over ten years but were not mentioned in GS009 Auditing Self-Managed Superannuation Funds until June 2020.
Does this gap mean some firms could unintentionally operate on the wrong side of existing laws?
Regardless of the pace at which technology moves, it is an SMSF professional's job to minimise risk, keep the SIS legislation, auditing standards and APES standards front of mind, and never rely solely on automated technology.
SMSF auditors, for example, must apply more hands-on, rigorous testing to complex investments where technology falls short.
Unfortunately, no system can do it all, regardless of the marketing hype. Any auditor who believes it can has not planned or performed the audit with professional scepticism, a requirement under the auditing standards.
With technology working groups now in place, the AUASB and the APESB have released guidance documents, including proposed technology-related amendments to APES 110.
Setting policies for generative AI platforms will help ensure the responsible, safe and ethical use of AI, addressing privacy, transparency, accountability and the potential biases that can affect users and influence decision-making.
Data security
Privacy and data security remain the most significant risks in the SMSF industry, with Australians losing more than $3 billion to scammers in 2022.
As long as AI remains unregulated, scammers can use it to target Australians through sophisticated phishing attacks, AI-generated voices that impersonate family members, automated scam calls, fraudulent messages and hacking into databases (as seen with Medibank and Optus).
Large-scale data breaches can also lead to the risk of identity fraud and scams.
There are currently several recommendations before the government that could result in new privacy laws where Australians gain greater control of their personal information, including the ability to opt out of targeted ads, erase their data and sue for serious privacy breaches.
From an SMSF point of view, professionals are responsible under the APES standards for ensuring they have a system of quality management in place, customised to the firm's technological resources.
That system requires controlling risk and putting quantitative and qualitative measures in place to protect the confidentiality of client information.
There is still a long way to go.
The future
SMSF professionals already use AI to automate certain aspects of their working lives. While there are significant benefits for cybersecurity, education and training, tailored financial planning and portfolio management, the ability to understand and navigate complex client relationships will remain crucial.
The reality is that technology saves time and reduces errors, enabling a higher quality of client service that builds trust and cultivates more positive, prosperous relationships.
Conclusion
AI will require SMSF professionals to learn and develop new skill sets to remain relevant, maintain a viable business model and survive.
Given the pace of technological change in the SMSF industry, we must be digitally fit, comply with the relevant professional standards and legislation, and become smarter risk-takers as AI continues to develop and move into the mainstream.