I recently wrote an article for AccountancyAge that delved into why AI guardrails are necessary for accounting firms to maximise the benefits the technology can deliver.
In the last few years, I’ve witnessed great change in the technology employed by the accounting industry. There’s been significant investment in software to help support accounting practices and, in tandem, firms are considering how they introduce better ways of working to suit the flexibility people expect from their employer today.
Though giving people a choice about where and when they work and upholding information security processes are two very different sides of a business, both demand structure. Without clear guidance, you can’t govern information security and compliance standards consistently, any more than you can run one set of flexible working rules for one team and a different set for another. Both require a policy.
Yet it surprises me how few accountants see the value of a policy when they first set out to adopt new AI technologies. ChatGPT has generated so much intrigue that people are already using it of their own volition – to write copy, or to help answer complex questions.
Indeed, it is often only when leaders step back and think about how major AI implementations will integrate with security systems and processes that they begin to understand the risks an unstructured introduction of AI can bring.
Creating an AI usage policy for your firm
But where do you start? It’s a good question, and one we have helped firms answer by drawing on our own experience of introducing AI to our business. The result is a template that can be adapted to suit each firm’s business goals and people strategy.
We generally advise that the policy guidance answers four main questions:
- Will the AI tools being considered add value?
- Are the tools secure?
- Have the limitations and risks been weighed against the opportunity and benefits?
- What role can an AI committee play?
Let’s look at each one in turn.
AI that adds value
This is fundamental. AI isn’t a vanity project. It’s a powerful technology that can improve productivity and efficiency, reduce errors, and free up time for more lucrative types of work.
However, as the stories about ChatGPT have shown, there are risks. AI must be trained, and used in tandem with an enquiring mind capable of spotting mistakes. Hallucinations – convincing but incorrect answers – are not something an accounting firm can afford to let into the business.
A policy should therefore detail which types of work qualify for AI adoption. Our own list comprises content creation, innovation, customer service, structuring data, programming and natural language processing.
Are the tools secure?
Any AI tool that makes the shortlist needs to combine practical appeal with robust security credentials. Think about how the tools are accessed, who has access, and why. Who maintains security? And can the tools be used to enhance security elsewhere? A good example is adopting 1Password per department, which keeps everything visible – there can be no pet projects.
Your AI policy needs to address these questions and record the answers.
What are the opportunities and benefits?
There are numerous benefits to be derived from AI: more productive teams, fewer errors, automation of repetitive tasks, spotting unusual patterns in data, and so on. Every business will want to gain from this list, but it’s easy to jump straight in without considering the pitfalls.
It’s important to remind everyone in the business of the limitations and risks. For example, while AI is clever, it isn’t perfect. And sharing customer and company data, such as source code and personal information, is a no-go area.
With this in mind, consider how the compliance and data protection policy needs to adapt, and make sure it’s shared and understood. It’s also a good idea to review intellectual property implications at the same time.
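As a simple illustration of that principle, here is a minimal sketch of stripping obvious personal identifiers from text before it ever reaches an external AI service. It is illustrative only, not a production control or our actual tooling; a real data protection measure needs far more than pattern matching:

```python
import re

# Illustrative only: a real data-protection control needs far more than
# regexes, but this shows the principle of redacting personal identifiers
# before any text leaves the business.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "UK_PHONE": re.compile(r"(?:\+44|0)\d{9,10}\b"),
}

def redact(text: str) -> str:
    """Replace recognisable personal identifiers with placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact Jane on jane.doe@client.co.uk or 07700900123."))
# -> "Contact Jane on [EMAIL] or [UK_PHONE]."
```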
Best practice says the security team should manage and maintain any AI being brought into the business, even experimental tools. They can ring-fence its use until it has proven both its value (i.e. the use case is clear and the tool is well understood to benefit process improvement) and its security credentials.
AI committee
This might seem extreme if you are early in your journey, but just as you might have a steering committee to oversee a new diversity initiative or an IT implementation, you need a cross-functional committee for bringing AI into the company. Not only does it provide structure and a safety net in terms of policy adherence, it also helps share ideas, provides guardrails for testing new tools and, once a tool is proven to be beneficial, gives other teams the go-ahead to adopt it without the teething problems.
Simplicity is key
The greatest threat is employees believing their use cases won’t be considered carefully or given guidance. So, whatever the internal review process, make sure everyone knows about it and can see that risks and rewards are weighed in a timely fashion.
We introduced a decision tree that we use internally at Silverfin to help our team enjoy the benefits AI offers while managing any risks.
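To give a flavour of how such a tree works, here is a minimal sketch in code. The questions, thresholds and outcomes are hypothetical simplifications for illustration, not our actual decision tree:

```python
# Hypothetical sketch of an AI-usage decision tree.
# The questions and outcomes are illustrative, not Silverfin's actual tree.

def ai_usage_decision(uses_client_data: bool,
                      tool_security_approved: bool,
                      output_will_be_reviewed: bool) -> str:
    """Walk a simple decision tree and return a recommendation."""
    if uses_client_data and not tool_security_approved:
        return "Stop: route the tool through the security team first."
    if not output_will_be_reviewed:
        return "Stop: AI output must be checked by a person before use."
    if uses_client_data:
        return "Proceed with the approved tool; log usage for the AI committee."
    return "Proceed: low-risk use case, follow the standard AI policy."

# Example: a drafting task with no client data, a vetted tool and human review.
print(ai_usage_decision(uses_client_data=False,
                        tool_security_approved=True,
                        output_will_be_reviewed=True))
```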
Every policy will be individual to the company, its culture, size and sector, but one thing is clear: AI can’t simply be left to grow organically in a business. It must be introduced with strategic aims, pragmatism and guardrails. That’s the best way to maintain the balance between risk and reward and to take everyone with you on the journey.