I recently wrote an article for AccountancyAge that explored why AI guardrails are essential for accounting firms looking to maximise the benefits of the technology.
Over the last few years, I’ve seen significant change in the technology used across the accounting profession. There has been substantial investment in software to support firms, alongside a growing focus on introducing new ways of working that reflect the flexibility people now expect from their employers.
While flexible working practices and information security may seem like very different areas of a business, both require clear structure and policy. Without guidance, it’s impossible to apply consistent standards for security and compliance, just as it’s unworkable to give one team one set of flexible working rules and another team a different set. Both need a defined policy.
What continues to surprise me is how few accountants recognise the importance of a policy when first adopting AI technologies. Tools like ChatGPT have generated enormous interest, and many people are using them independently – whether to write content or help answer complex questions.
It’s often only when leadership teams pause to consider how AI integrates with existing security systems and processes that the risks of an unstructured approach become clear.
Creating an AI usage policy for your firm
But where do you start? It’s a fair question, and one we’ve helped firms answer using our own experience of introducing AI into our business. The approach acts as a template that can be adapted to suit individual business goals and people strategies.

Four key questions your AI policy should address
- Will the AI tools being considered add value?
- Are the tools secure?
- Have the limitations and risks been weighed against the opportunities and benefits?
- What role can an AI committee play?
Let’s look at each one in turn.
AI that adds value
This is fundamental. AI should never be adopted as a novelty. It’s a powerful technology that can improve productivity, increase efficiency, reduce errors and free up time for more valuable work.
However, as high-profile examples have shown, AI carries risk. It must be trained and used alongside professional judgement. Hallucinations – answers that sound convincing but are incorrect – are not something accounting firms can afford.
An AI policy should therefore define which activities are suitable for AI use. In our own experience, these include content creation, innovation, customer service, data structuring, programming and natural language processing.
Are the tools secure?
Any AI tool under consideration must combine practical value with strong security credentials. Firms need to understand how tools are accessed, who can use them and why. Questions around ownership, maintenance and whether tools can enhance wider security controls should also be addressed.
Your AI policy should clearly document how these considerations are assessed and governed.
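To make that documentation concrete, the hypothetical sketch below shows, in Python, the kind of record a tool register might hold for each AI tool under review. Every field name and value is an illustrative assumption rather than a prescribed schema.

```python
# Hypothetical tool-register entry an AI policy might require for each tool
# under consideration. All field names and values are illustrative assumptions.
tool_register_entry = {
    "tool": "ExampleGPT",                             # placeholder product name
    "purpose": "drafting client-facing content",      # why the tool adds value
    "access": "single sign-on, licensed seats only",  # how the tool is accessed
    "approved_users": ["marketing", "advisory"],      # who can use it, and why
    "data_allowed": "no client data, no personal information, no source code",
    "owner": "information security team",             # ownership and maintenance
    "next_review": "quarterly",                       # when the assessment is revisited
    "status": "trial",                                # trial / approved / blocked
}
```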
Weighing the risks against the benefits
AI can bring many benefits: more productive teams, fewer errors, automation of repetitive tasks and the ability to spot unusual patterns in data. However, it’s easy to focus on these gains without fully considering the risks.
It’s important to reinforce that AI is not infallible. Sharing sensitive customer data, source code or personal information should always be prohibited. Compliance and data protection policies may need to be updated and clearly communicated, including guidance around intellectual property.
Best practice suggests that security teams should oversee any AI tools introduced into the business, even on a trial basis. This helps control risk until the use case and security posture are fully understood.
AI committee
While it may feel excessive early on, a cross-functional AI committee can provide valuable structure. Much like governance groups for IT or diversity initiatives, an AI committee helps ensure policy adherence, supports experimentation within defined guardrails and enables wider rollout once tools prove their value.
Simplicity is key
One of the biggest risks is employees feeling unsure whether their AI use cases will be reviewed fairly or supported. Whatever internal process you adopt, it should be clearly communicated and easy to understand.
At Silverfin, we introduced a simple decision tree to help our teams benefit from AI while managing risk responsibly.
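Our tree itself isn’t reproduced here, but the sketch below illustrates, in Python, how such a decision tree might route a use case through the kinds of questions discussed above. The specific questions, outcomes and structure are assumptions made for the example, not our actual process.

```python
from __future__ import annotations

from dataclasses import dataclass

# Final outcomes an employee can reach.
APPROVED = "Approved: use the tool within the documented guardrails."
BLOCKED = "Blocked: do not use AI for this task."
ESCALATE = "Escalate: refer the use case to the AI committee for review."

@dataclass
class Node:
    """A yes/no question that routes to another question or a final outcome."""
    question: str
    yes: Node | str
    no: Node | str

# A hypothetical tree built from the policy questions discussed above.
TREE = Node(
    question="Does the task involve sensitive customer data, personal information or source code?",
    yes=BLOCKED,
    no=Node(
        question="Is the tool on the security team's approved list?",
        yes=Node(
            question="Is this an approved activity (e.g. content creation, data structuring)?",
            yes=APPROVED,
            no=ESCALATE,
        ),
        no=ESCALATE,
    ),
)

def walk(node: Node | str) -> str:
    """Ask each question in turn until a final outcome is reached."""
    while isinstance(node, Node):
        answer = input(f"{node.question} [y/n] ").strip().lower()
        node = node.yes if answer.startswith("y") else node.no
    return node

if __name__ == "__main__":
    print(walk(TREE))
```

In practice the same logic can live in a one-page flowchart rather than code; what matters is that every employee reaches a clear outcome – approved, blocked or escalated – in a few steps.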
Every organisation’s policy will differ based on culture, size and sector. But AI cannot grow organically without direction. It must be introduced with purpose, pragmatism and guardrails to balance risk and reward and bring people along on the journey.