In the first official episode of the Silver Linings podcast series, Phil sits down with Claus Dahl, mathematician and AI expert from Visma, to discuss the real-world impact of AI. Claus explores its biggest opportunities, potential risks and how it could level the playing field for smaller businesses.
This episode offers a grounded perspective on what AI means for the future of work – without the robot takeover fears.
Read the blog to unlock the key takeaways from Claus’ conversation.
Transcript
Intro: What do you see as the biggest opportunity for AI across accounting, but also maybe even in wider industries today? AI is increasingly shaping the internet we engage with. What are the most pressing considerations for businesses and, I guess, for wider society around AI?
I think the real risk here is just getting it wrong—just bad AI. That has been the history of AI since the 1950s. As an AI specialist, but even just as a citizen, this is something that concerns me deeply and should concern you and your listeners as well.
But don’t worry—AI isn’t taking over!
In this episode, I’m joined by Claus Dahl, a mathematician and AI expert from Visma, to unpack the real-world impact of AI. From big opportunities and risks to how AI could level the playing field for smaller businesses, we dive into what AI means for the future of work and why it’s not all about robots taking over.
The Shift in AI’s Biggest Applications
Host: Claus, welcome to the podcast, and thank you very much for joining me. I’ve been looking forward to this conversation to dive a little more into AI and talk with you on that topic. So, thank you very much for being here today.
Guest (Claus, Mathematician & AI Expert at Visma): Thank you so much for asking! It’s always fun to talk about AI.
Host: Excellent! Well, let’s get going then. So, I think the first question for me is: what do you see as the biggest opportunity for AI across accounting, but also maybe even in wider industries today?
Guest: Yeah. I think I’m actually a little bit surprised. Having been in the field for as many years as I have – about eight years now – there’s been an interesting shift.
When I started working in AI, we were very much focused on automating routine tasks because there are a lot of them, and they’re easy to automate. But it turns out that the biggest thing people are using AI for now, the most valuable application, isn’t routine at all—it’s writing software.
Almost every business that employs software developers is turning to AI to lower the cost of software production. That’s actually the leading application right now, which is interesting because the tables have turned on the software industry. Software developers, who were once considered the last people to be automated, are now among the first.
Apart from that, I think search is going to be interesting. Google’s position in the market will be fascinating to watch in the next few years, given that we now have AI tools answering our questions. That’s tremendously important.
Beyond search, which helps us sift through the vast amount of information online, AI is also going to play a huge role in how businesses process information internally. Whether it’s in industries, enterprises, or general knowledge work, managing the flood of information in front of us is going to be one of AI’s biggest applications.
Could AI Challenge Google’s Dominance?
Host: That’s really interesting. Just digging into a couple of those points—search, in particular, is fascinating.
I’m like you, Claus. I remember when the internet, as we know it today, was just starting, and search was quite easy. There were only a few thousand, maybe tens of thousands, of web pages, so it was easier to index and find what you were looking for. Now, we have an internet with billions of websites, pages, and pieces of information.
I wonder if companies like Google, which have had a monopoly for so long, might actually—for the first time—face a genuine competitor. If someone can do AI-powered search really well, could Google be at risk the same way early search providers like Yahoo were?
Guest: Yeah, and it’s interesting that you mention Yahoo because there’s something to be said about that. I actually think Google is in more trouble than Yahoo was.
AI is increasingly generating the internet that we look at, and one of its main uses is to manipulate Google’s algorithms—essentially, to game the system and get to the top of search rankings.
There’s going to be a battle between the AI models interpreting the internet and the AI models generating content for the internet. If you have a pre-AI product, like Google does—though, to be fair, they’ve been using AI for many years—you’re at risk.
And it’s funny that you mentioned Yahoo, which was a curated internet service back in the ’90s. I think there’s a real chance that we’ll move back toward more curated sources simply because the larger internet has become too unreliable, too AI-generated.
So, Google could very well be in trouble. I’m not sure who the winner will be—whether it’s Perplexity or someone else—but it may be a company that focuses more on data curation rather than just ranking AI-generated content.
AI, Ethics, and the Real Risks
Host: That’s a great point. No one knew who Google was—until suddenly, everyone did. They were just a small startup, a scrappy little company that came out of nowhere and quickly dominated the industry.
I guess when people think about AI, their biggest concerns tend to be around ethics and safety. From your perspective, what are the most pressing considerations for businesses and wider society when it comes to AI?
Guest: This is an extremely important question. Of course, as an AI specialist, but even as a citizen, this is something that concerns me deeply, and should concern you and your listeners. And honestly, I find the discourse around it frustrating.
Partly due to the way big AI vendors market themselves, there’s been a lot of focus on “superhuman AI” – the idea that AI will become all-powerful, like in The Terminator. I think that’s overblown, and it distracts from the real risks. The real risk is getting it wrong – bad AI: systems that seem pretty good but aren’t really any good.
The real risk isn’t AI taking over. It’s just bad AI.
We need to make sure that people get the right advice and that AI doesn’t make harmful mistakes—whether that’s a small business owner getting incorrect financial advice or a patient receiving the wrong treatment plan because of an AI-generated diagnosis.
The Challenges of Overconfidence in AI
Host: That’s a really interesting point. I think it’s like any new tool. When the internet first started, everyone built a webpage, and there were thousands of really bad ones. I’ll admit, I probably made a few of them myself. But over the years, websites got better, and the tools for creating them improved.
I think you’re probably right. I’m just a little disappointed that you don’t see a future where AI leads to The Terminator scenario, because I’ve already packed my emergency supplies under the sofa—just in case!
Guest: Of course, of course! We all have our go bags at home. But honestly, I think Vladimir Putin is a more immediate and real threat than AI turning into The Terminator!
That said, we should still be interested in the bigger risks AI poses. But I believe overconfidence in AI is a far bigger problem than AI itself. Even when it comes to concerns about superhuman intelligence, the real danger is someone marketing an AI as superhuman when it’s actually flawed.
Here’s a fun fact from AI history: self-driving cars have been promised for a long time. I found a website recently that tracked every time Elon Musk said we’d have self-driving cars “next year.” He said it every year for 10 years—from 2012 to 2022. And yet, we still don’t have them.
There’s this persistent overconfidence in AI—that the future is just around the corner. And I think that’s the key risk, even when we talk about so-called “superhuman” AI.
Host: Yeah, I think you’re right. There are people in the real world who are far scarier than any AI we can imagine.
I never realised that about Elon Musk, though—that he kept saying self-driving cars were coming next year. It’s always “next year,” isn’t it?
Guest: Exactly! That’s been the history of AI since the 1950s: “We’ll be there soon!”
Host: We had a program in the UK called Tomorrow’s World—a tech show for kids that showcased future technology. Someone went back and looked at all their predictions, and while some of the tech did eventually arrive, it was often decades later than predicted.
AI and the Future of Work
Host: Another big concern people have about AI—beyond ethics and security—is its impact on jobs.
AI is often seen as a way to drive efficiency, which usually means reducing the number of people needed to do a job. How do you see AI shaping the future of work? Specifically, in terms of job creation, skill needs, and whether there’s a positive or negative side to this?
Guest: I think, first of all, AI has to bring changes; otherwise, it wouldn’t be interesting. But this isn’t a new conversation—it has been going on since the Industrial Revolution.
A hundred years ago, economist John Maynard Keynes predicted that his grandchildren would only work 15-hour weeks because automation would remove so much labour. And yes, working hours have decreased since the 1910s and 1920s, but we’re still not at 15-hour weeks.
I don’t know about you, but my working weeks are certainly not that short!
So, it’s the same thing with AI. Yes, it will create efficiencies, but people overestimate how quickly AI will replace jobs. The reality is, AI systems don’t run themselves. They need to be built, managed, and directed in a beneficial way.
In fact, we’ll likely see new job categories emerge that don’t exist yet. Take AI evaluation, for example—that’s going to be a real job soon. Someone will need to assess how AI is performing, ensuring accuracy and minimising risks. I haven’t seen a job posting for it yet, but it’s coming.
There will be roles where the job is explicitly to work alongside AI—to steer and supervise it.
At the same time, AI will allow businesses to do things that didn’t make financial sense before. Small businesses, for example, often don’t have finance departments. They might only look at their financial situation a few times a year when an accountant reviews their books.
But with AI automation, we can now provide continuous financial oversight. That allows businesses to optimise in ways that were once only possible for large companies with full finance teams. So, AI can actually level the playing field.
Host: That’s a really good point. It reminds me of when Xero launched in New Zealand and the UK. Everyone said cloud bookkeeping would put people out of work. Then, when receipt-scanning tools like Dext (formerly Receipt Bank) came out, people said the same thing.
But the reality in the accounting industry is that firms still struggle with recruitment. Many accountants are turning away work because they don’t have enough staff.
Guest: Exactly! AI won’t make administrative personnel unemployed. Instead, they’ll focus on higher-value tasks rather than the repetitive ones AI handles.
The real issue isn’t that AI will take jobs—it’s that professionals who refuse to adapt will make themselves obsolete. If someone refuses to work with bookkeeping technology or receipt-scanning tools, they’ll fall behind.
It’s the same with AI. It’s a pretty safe bet that, whatever job you have, you’ll be working alongside AI in some capacity.
If you refuse to use a computer in your job, you’re at a disadvantage. And I think that’s what’s happening now with AI—it’s becoming an essential tool.
What Businesses Should Consider When Adopting AI
Host: That leads me to a good question. What should companies consider when adopting AI? How can they ensure it adds real value to what they’re doing?
Because right now, we’re at a stage where everyone is just slapping “AI” onto their products. Many of them are just rebranded versions of ChatGPT, rather than something truly innovative.
Guest: I think you almost answered the question yourself—focus on genuine value.
We talk to a lot of teams who are considering AI, and many assume AI is just about chatbots—something you talk to. But most AI is not about conversation.
We aren’t in the conversation business—that’s Facebook. We’re in the automation and efficiency business.
So, companies should focus on what AI can actually do for them. Not how much employees can talk to an AI, but what work the AI can handle for them.
And evaluation is key. Businesses need to measure whether AI is actually improving efficiency or just creating extra complexity.
Favorite AI Products and Tools
Host: That’s great advice. Speaking of AI products, what are some of your favorites? Are there any that make you think, “I wish I had come up with that”?
Guest: As a software professional, I love AI tools that make programming fun again. AI has removed so much of the tedious setup and configuration work, which means I can experiment more freely.
A fun one for me personally is Suno, the AI music generator. It lets you create songs in just a couple of minutes, which is great because I love music.
Final Thoughts: Advice for Businesses Using AI
Host: If you had one piece of advice for businesses adopting AI today, what would it be?
Guest: Let your employees experiment.
Create a culture where people can try new ways of using AI. But at the same time, focus on quality—evaluate AI’s impact carefully.
Host: Claus, thank you very much for your time. It was a pleasure talking to you.
Guest: Thank you!